00:00:00.000 Started by upstream project "autotest-per-patch" build number 132383 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.016 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.017 The recommended git tool is: git 00:00:00.017 using credential 00000000-0000-0000-0000-000000000002 00:00:00.019 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.037 Fetching changes from the remote Git repository 00:00:00.040 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.067 Using shallow fetch with depth 1 00:00:00.067 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.067 > git --version # timeout=10 00:00:00.114 > git --version # 'git version 2.39.2' 00:00:00.114 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.180 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.180 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.806 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.818 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.830 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.830 > git config core.sparsecheckout # timeout=10 00:00:02.844 > git read-tree -mu HEAD # timeout=10 00:00:02.859 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.879 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.879 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.966 [Pipeline] Start of Pipeline 00:00:02.979 [Pipeline] library 00:00:02.980 Loading library shm_lib@master 00:00:02.981 Library shm_lib@master is cached. Copying from home. 00:00:02.992 [Pipeline] node 00:00:03.003 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:03.004 [Pipeline] { 00:00:03.011 [Pipeline] catchError 00:00:03.012 [Pipeline] { 00:00:03.025 [Pipeline] wrap 00:00:03.034 [Pipeline] { 00:00:03.044 [Pipeline] stage 00:00:03.046 [Pipeline] { (Prologue) 00:00:03.064 [Pipeline] echo 00:00:03.065 Node: VM-host-WFP7 00:00:03.072 [Pipeline] cleanWs 00:00:03.082 [WS-CLEANUP] Deleting project workspace... 00:00:03.082 [WS-CLEANUP] Deferred wipeout is used... 00:00:03.089 [WS-CLEANUP] done 00:00:03.281 [Pipeline] setCustomBuildProperty 00:00:03.366 [Pipeline] httpRequest 00:00:03.801 [Pipeline] echo 00:00:03.802 Sorcerer 10.211.164.20 is alive 00:00:03.812 [Pipeline] retry 00:00:03.813 [Pipeline] { 00:00:03.827 [Pipeline] httpRequest 00:00:03.832 HttpMethod: GET 00:00:03.832 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.833 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.845 Response Code: HTTP/1.1 200 OK 00:00:03.846 Success: Status code 200 is in the accepted range: 200,404 00:00:03.846 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.276 [Pipeline] } 00:00:06.286 [Pipeline] // retry 00:00:06.291 [Pipeline] sh 00:00:06.572 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.586 [Pipeline] httpRequest 00:00:07.164 [Pipeline] echo 00:00:07.166 Sorcerer 10.211.164.20 is alive 00:00:07.173 [Pipeline] retry 00:00:07.175 [Pipeline] { 00:00:07.187 [Pipeline] httpRequest 00:00:07.191 HttpMethod: GET 00:00:07.192 URL: 
http://10.211.164.20/packages/spdk_0383e688b2626e5f24bd789be9d7084d3cd6bdef.tar.gz 00:00:07.193 Sending request to url: http://10.211.164.20/packages/spdk_0383e688b2626e5f24bd789be9d7084d3cd6bdef.tar.gz 00:00:07.202 Response Code: HTTP/1.1 200 OK 00:00:07.202 Success: Status code 200 is in the accepted range: 200,404 00:00:07.203 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_0383e688b2626e5f24bd789be9d7084d3cd6bdef.tar.gz 00:01:46.678 [Pipeline] } 00:01:46.700 [Pipeline] // retry 00:01:46.708 [Pipeline] sh 00:01:46.991 + tar --no-same-owner -xf spdk_0383e688b2626e5f24bd789be9d7084d3cd6bdef.tar.gz 00:01:49.539 [Pipeline] sh 00:01:49.862 + git -C spdk log --oneline -n5 00:01:49.862 0383e688b bdev/nvme: Fix race between reset and qpair creation/deletion 00:01:49.862 a5dab6cf7 test/nvme/xnvme: Make sure nvme selected for tests is not used 00:01:49.862 876509865 test/nvme/xnvme: Test all conserve_cpu variants 00:01:49.862 a25b16198 test/nvme/xnvme: Enable polling in nvme driver 00:01:49.862 bb53e3ad9 test/nvme/xnvme: Drop null_blk 00:01:49.881 [Pipeline] writeFile 00:01:49.896 [Pipeline] sh 00:01:50.184 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:50.216 [Pipeline] sh 00:01:50.504 + cat autorun-spdk.conf 00:01:50.504 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:50.504 SPDK_RUN_ASAN=1 00:01:50.504 SPDK_RUN_UBSAN=1 00:01:50.504 SPDK_TEST_RAID=1 00:01:50.504 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:50.512 RUN_NIGHTLY=0 00:01:50.514 [Pipeline] } 00:01:50.529 [Pipeline] // stage 00:01:50.544 [Pipeline] stage 00:01:50.546 [Pipeline] { (Run VM) 00:01:50.559 [Pipeline] sh 00:01:50.844 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:50.844 + echo 'Start stage prepare_nvme.sh' 00:01:50.844 Start stage prepare_nvme.sh 00:01:50.844 + [[ -n 7 ]] 00:01:50.844 + disk_prefix=ex7 00:01:50.844 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:01:50.844 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:50.844 
+ source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:01:50.844 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:50.844 ++ SPDK_RUN_ASAN=1 00:01:50.844 ++ SPDK_RUN_UBSAN=1 00:01:50.844 ++ SPDK_TEST_RAID=1 00:01:50.844 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:50.844 ++ RUN_NIGHTLY=0 00:01:50.844 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:50.844 + nvme_files=() 00:01:50.844 + declare -A nvme_files 00:01:50.844 + backend_dir=/var/lib/libvirt/images/backends 00:01:50.844 + nvme_files['nvme.img']=5G 00:01:50.844 + nvme_files['nvme-cmb.img']=5G 00:01:50.844 + nvme_files['nvme-multi0.img']=4G 00:01:50.844 + nvme_files['nvme-multi1.img']=4G 00:01:50.844 + nvme_files['nvme-multi2.img']=4G 00:01:50.844 + nvme_files['nvme-openstack.img']=8G 00:01:50.844 + nvme_files['nvme-zns.img']=5G 00:01:50.844 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:50.844 + (( SPDK_TEST_FTL == 1 )) 00:01:50.844 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:50.844 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:50.844 + for nvme in "${!nvme_files[@]}" 00:01:50.844 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:50.844 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:50.844 + for nvme in "${!nvme_files[@]}" 00:01:50.844 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:50.844 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:50.844 + for nvme in "${!nvme_files[@]}" 00:01:50.844 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:50.844 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:50.844 + for nvme in "${!nvme_files[@]}" 00:01:50.844 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh 
-n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:50.844 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:50.844 + for nvme in "${!nvme_files[@]}" 00:01:50.844 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:50.844 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:50.844 + for nvme in "${!nvme_files[@]}" 00:01:50.844 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:50.844 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:50.844 + for nvme in "${!nvme_files[@]}" 00:01:50.844 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:51.104 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:51.104 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:51.104 + echo 'End stage prepare_nvme.sh' 00:01:51.104 End stage prepare_nvme.sh 00:01:51.116 [Pipeline] sh 00:01:51.400 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:51.400 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39 00:01:51.400 00:01:51.400 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:51.400 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:51.400 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:51.400 HELP=0 00:01:51.400 DRY_RUN=0 
00:01:51.400 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:01:51.400 NVME_DISKS_TYPE=nvme,nvme, 00:01:51.400 NVME_AUTO_CREATE=0 00:01:51.400 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:01:51.400 NVME_CMB=,, 00:01:51.400 NVME_PMR=,, 00:01:51.400 NVME_ZNS=,, 00:01:51.400 NVME_MS=,, 00:01:51.400 NVME_FDP=,, 00:01:51.400 SPDK_VAGRANT_DISTRO=fedora39 00:01:51.400 SPDK_VAGRANT_VMCPU=10 00:01:51.400 SPDK_VAGRANT_VMRAM=12288 00:01:51.400 SPDK_VAGRANT_PROVIDER=libvirt 00:01:51.400 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:51.400 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:51.401 SPDK_OPENSTACK_NETWORK=0 00:01:51.401 VAGRANT_PACKAGE_BOX=0 00:01:51.401 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:51.401 FORCE_DISTRO=true 00:01:51.401 VAGRANT_BOX_VERSION= 00:01:51.401 EXTRA_VAGRANTFILES= 00:01:51.401 NIC_MODEL=virtio 00:01:51.401 00:01:51.401 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:01:51.401 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:53.309 Bringing machine 'default' up with 'libvirt' provider... 00:01:53.879 ==> default: Creating image (snapshot of base box volume). 00:01:53.879 ==> default: Creating domain with the following settings... 
00:01:53.879 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732101096_403f4132c70bd2734fd0 00:01:53.879 ==> default: -- Domain type: kvm 00:01:53.879 ==> default: -- Cpus: 10 00:01:53.879 ==> default: -- Feature: acpi 00:01:53.879 ==> default: -- Feature: apic 00:01:53.879 ==> default: -- Feature: pae 00:01:53.879 ==> default: -- Memory: 12288M 00:01:53.879 ==> default: -- Memory Backing: hugepages: 00:01:53.879 ==> default: -- Management MAC: 00:01:53.880 ==> default: -- Loader: 00:01:53.880 ==> default: -- Nvram: 00:01:53.880 ==> default: -- Base box: spdk/fedora39 00:01:53.880 ==> default: -- Storage pool: default 00:01:53.880 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732101096_403f4132c70bd2734fd0.img (20G) 00:01:53.880 ==> default: -- Volume Cache: default 00:01:53.880 ==> default: -- Kernel: 00:01:53.880 ==> default: -- Initrd: 00:01:53.880 ==> default: -- Graphics Type: vnc 00:01:53.880 ==> default: -- Graphics Port: -1 00:01:53.880 ==> default: -- Graphics IP: 127.0.0.1 00:01:53.880 ==> default: -- Graphics Password: Not defined 00:01:53.880 ==> default: -- Video Type: cirrus 00:01:53.880 ==> default: -- Video VRAM: 9216 00:01:53.880 ==> default: -- Sound Type: 00:01:53.880 ==> default: -- Keymap: en-us 00:01:53.880 ==> default: -- TPM Path: 00:01:53.880 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:53.880 ==> default: -- Command line args: 00:01:53.880 ==> default: -> value=-device, 00:01:53.880 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:53.880 ==> default: -> value=-drive, 00:01:53.880 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:01:53.880 ==> default: -> value=-device, 00:01:53.880 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:53.880 ==> default: -> value=-device, 00:01:53.880 ==> default: -> 
value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:53.880 ==> default: -> value=-drive, 00:01:53.880 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:53.880 ==> default: -> value=-device, 00:01:53.880 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:53.880 ==> default: -> value=-drive, 00:01:53.880 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:53.880 ==> default: -> value=-device, 00:01:53.880 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:53.880 ==> default: -> value=-drive, 00:01:53.880 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:53.880 ==> default: -> value=-device, 00:01:53.880 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:54.140 ==> default: Creating shared folders metadata... 00:01:54.140 ==> default: Starting domain. 00:01:55.522 ==> default: Waiting for domain to get an IP address... 00:02:13.675 ==> default: Waiting for SSH to become available... 00:02:13.675 ==> default: Configuring and enabling network interfaces... 00:02:18.958 default: SSH address: 192.168.121.54:22 00:02:18.958 default: SSH username: vagrant 00:02:18.958 default: SSH auth method: private key 00:02:21.500 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:29.627 ==> default: Mounting SSHFS shared folder... 00:02:32.158 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:32.158 ==> default: Checking Mount.. 
00:02:33.093 ==> default: Folder Successfully Mounted! 00:02:33.093 ==> default: Running provisioner: file... 00:02:34.476 default: ~/.gitconfig => .gitconfig 00:02:34.736 00:02:34.736 SUCCESS! 00:02:34.736 00:02:34.736 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:34.736 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:34.736 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:34.736 00:02:34.747 [Pipeline] } 00:02:34.763 [Pipeline] // stage 00:02:34.774 [Pipeline] dir 00:02:34.774 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:02:34.776 [Pipeline] { 00:02:34.791 [Pipeline] catchError 00:02:34.793 [Pipeline] { 00:02:34.807 [Pipeline] sh 00:02:35.093 + vagrant ssh-config --host vagrant 00:02:35.093 + sed -ne /^Host/,$p 00:02:35.093 + tee ssh_conf 00:02:38.386 Host vagrant 00:02:38.387 HostName 192.168.121.54 00:02:38.387 User vagrant 00:02:38.387 Port 22 00:02:38.387 UserKnownHostsFile /dev/null 00:02:38.387 StrictHostKeyChecking no 00:02:38.387 PasswordAuthentication no 00:02:38.387 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:38.387 IdentitiesOnly yes 00:02:38.387 LogLevel FATAL 00:02:38.387 ForwardAgent yes 00:02:38.387 ForwardX11 yes 00:02:38.387 00:02:38.401 [Pipeline] withEnv 00:02:38.403 [Pipeline] { 00:02:38.417 [Pipeline] sh 00:02:38.699 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:38.699 source /etc/os-release 00:02:38.699 [[ -e /image.version ]] && img=$(< /image.version) 00:02:38.699 # Minimal, systemd-like check. 
00:02:38.699 if [[ -e /.dockerenv ]]; then 00:02:38.699 # Clear garbage from the node's name: 00:02:38.699 # agt-er_autotest_547-896 -> autotest_547-896 00:02:38.699 # $HOSTNAME is the actual container id 00:02:38.699 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:38.699 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:38.700 # We can assume this is a mount from a host where container is running, 00:02:38.700 # so fetch its hostname to easily identify the target swarm worker. 00:02:38.700 container="$(< /etc/hostname) ($agent)" 00:02:38.700 else 00:02:38.700 # Fallback 00:02:38.700 container=$agent 00:02:38.700 fi 00:02:38.700 fi 00:02:38.700 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:38.700 00:02:38.970 [Pipeline] } 00:02:38.987 [Pipeline] // withEnv 00:02:38.994 [Pipeline] setCustomBuildProperty 00:02:39.010 [Pipeline] stage 00:02:39.013 [Pipeline] { (Tests) 00:02:39.030 [Pipeline] sh 00:02:39.309 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:39.581 [Pipeline] sh 00:02:39.863 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:40.139 [Pipeline] timeout 00:02:40.139 Timeout set to expire in 1 hr 30 min 00:02:40.141 [Pipeline] { 00:02:40.158 [Pipeline] sh 00:02:40.445 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:41.019 HEAD is now at 0383e688b bdev/nvme: Fix race between reset and qpair creation/deletion 00:02:41.031 [Pipeline] sh 00:02:41.314 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:41.588 [Pipeline] sh 00:02:41.872 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:42.150 [Pipeline] sh 00:02:42.435 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 
JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:42.695 ++ readlink -f spdk_repo 00:02:42.695 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:42.695 + [[ -n /home/vagrant/spdk_repo ]] 00:02:42.695 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:42.695 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:42.695 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:42.695 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:02:42.695 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:42.695 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:42.695 + cd /home/vagrant/spdk_repo 00:02:42.695 + source /etc/os-release 00:02:42.695 ++ NAME='Fedora Linux' 00:02:42.695 ++ VERSION='39 (Cloud Edition)' 00:02:42.695 ++ ID=fedora 00:02:42.695 ++ VERSION_ID=39 00:02:42.695 ++ VERSION_CODENAME= 00:02:42.695 ++ PLATFORM_ID=platform:f39 00:02:42.695 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:42.695 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:42.695 ++ LOGO=fedora-logo-icon 00:02:42.695 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:42.695 ++ HOME_URL=https://fedoraproject.org/ 00:02:42.695 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:42.695 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:42.695 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:42.695 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:42.695 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:42.695 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:42.695 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:42.695 ++ SUPPORT_END=2024-11-12 00:02:42.695 ++ VARIANT='Cloud Edition' 00:02:42.695 ++ VARIANT_ID=cloud 00:02:42.695 + uname -a 00:02:42.695 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:42.695 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:43.266 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:43.266 Hugepages 00:02:43.266 
node hugesize free / total 00:02:43.266 node0 1048576kB 0 / 0 00:02:43.266 node0 2048kB 0 / 0 00:02:43.266 00:02:43.266 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:43.266 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:43.266 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:43.266 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:02:43.266 + rm -f /tmp/spdk-ld-path 00:02:43.266 + source autorun-spdk.conf 00:02:43.266 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:43.266 ++ SPDK_RUN_ASAN=1 00:02:43.266 ++ SPDK_RUN_UBSAN=1 00:02:43.266 ++ SPDK_TEST_RAID=1 00:02:43.266 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:43.266 ++ RUN_NIGHTLY=0 00:02:43.266 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:43.266 + [[ -n '' ]] 00:02:43.266 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:43.266 + for M in /var/spdk/build-*-manifest.txt 00:02:43.266 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:43.266 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:43.266 + for M in /var/spdk/build-*-manifest.txt 00:02:43.266 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:43.266 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:43.266 + for M in /var/spdk/build-*-manifest.txt 00:02:43.266 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:43.266 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:43.266 ++ uname 00:02:43.266 + [[ Linux == \L\i\n\u\x ]] 00:02:43.266 + sudo dmesg -T 00:02:43.266 + sudo dmesg --clear 00:02:43.525 + dmesg_pid=5434 00:02:43.525 + [[ Fedora Linux == FreeBSD ]] 00:02:43.525 + sudo dmesg -Tw 00:02:43.525 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:43.525 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:43.525 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:43.525 + [[ -x /usr/src/fio-static/fio ]] 00:02:43.525 + export FIO_BIN=/usr/src/fio-static/fio 
00:02:43.525 + FIO_BIN=/usr/src/fio-static/fio 00:02:43.525 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:43.525 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:43.525 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:43.525 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:43.525 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:43.525 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:43.525 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:43.525 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:43.525 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:43.525 11:12:26 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:43.525 11:12:26 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:43.525 11:12:26 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:43.525 11:12:26 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:02:43.525 11:12:26 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:02:43.525 11:12:26 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:02:43.525 11:12:26 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:43.525 11:12:26 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:02:43.525 11:12:26 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:43.525 11:12:26 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:43.525 11:12:26 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:43.525 11:12:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:43.525 11:12:26 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:43.525 11:12:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:43.525 11:12:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:43.525 
11:12:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:43.525 11:12:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.525 11:12:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.525 11:12:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.525 11:12:26 -- paths/export.sh@5 -- $ export PATH 00:02:43.525 11:12:26 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.525 11:12:26 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:43.525 11:12:26 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:43.525 11:12:26 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732101146.XXXXXX 00:02:43.525 11:12:26 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732101146.A1Rvxu 00:02:43.525 11:12:26 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:43.525 11:12:26 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:43.525 11:12:26 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:43.525 11:12:26 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:43.525 11:12:26 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:43.525 11:12:26 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:43.525 11:12:26 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:43.525 11:12:26 -- common/autotest_common.sh@10 -- $ set +x 00:02:43.525 11:12:26 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 
00:02:43.525 11:12:26 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:43.525 11:12:26 -- pm/common@17 -- $ local monitor 00:02:43.525 11:12:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.525 11:12:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.525 11:12:26 -- pm/common@25 -- $ sleep 1 00:02:43.525 11:12:26 -- pm/common@21 -- $ date +%s 00:02:43.525 11:12:26 -- pm/common@21 -- $ date +%s 00:02:43.525 11:12:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732101146 00:02:43.525 11:12:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732101146 00:02:43.785 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732101146_collect-cpu-load.pm.log 00:02:43.785 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732101146_collect-vmstat.pm.log 00:02:44.725 11:12:27 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:44.725 11:12:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:44.725 11:12:27 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:44.725 11:12:27 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:44.725 11:12:27 -- spdk/autobuild.sh@16 -- $ date -u 00:02:44.725 Wed Nov 20 11:12:27 AM UTC 2024 00:02:44.725 11:12:27 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:44.725 v25.01-pre-213-g0383e688b 00:02:44.725 11:12:27 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:44.725 11:12:27 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:44.725 11:12:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:44.725 11:12:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:44.725 11:12:27 -- common/autotest_common.sh@10 -- $ set +x 
00:02:44.725 ************************************ 00:02:44.725 START TEST asan 00:02:44.725 ************************************ 00:02:44.725 using asan 00:02:44.725 11:12:27 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:44.725 00:02:44.725 real 0m0.001s 00:02:44.725 user 0m0.001s 00:02:44.725 sys 0m0.000s 00:02:44.725 11:12:27 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:44.725 11:12:27 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:44.725 ************************************ 00:02:44.725 END TEST asan 00:02:44.725 ************************************ 00:02:44.725 11:12:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:44.725 11:12:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:44.725 11:12:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:44.725 11:12:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:44.725 11:12:27 -- common/autotest_common.sh@10 -- $ set +x 00:02:44.725 ************************************ 00:02:44.725 START TEST ubsan 00:02:44.725 ************************************ 00:02:44.725 using ubsan 00:02:44.725 11:12:27 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:44.725 00:02:44.725 real 0m0.000s 00:02:44.725 user 0m0.000s 00:02:44.725 sys 0m0.000s 00:02:44.725 11:12:27 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:44.725 11:12:27 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:44.725 ************************************ 00:02:44.725 END TEST ubsan 00:02:44.725 ************************************ 00:02:44.725 11:12:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:44.725 11:12:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:44.725 11:12:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:44.725 11:12:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:44.725 11:12:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:44.725 11:12:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 
]] 00:02:44.725 11:12:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:44.725 11:12:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:44.725 11:12:27 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:02:44.985 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:44.985 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:45.554 Using 'verbs' RDMA provider 00:03:01.394 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:16.293 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:16.861 Creating mk/config.mk...done. 00:03:16.861 Creating mk/cc.flags.mk...done. 00:03:16.861 Type 'make' to build. 00:03:16.861 11:12:59 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:16.861 11:12:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:16.861 11:12:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:16.861 11:12:59 -- common/autotest_common.sh@10 -- $ set +x 00:03:16.861 ************************************ 00:03:16.861 START TEST make 00:03:16.861 ************************************ 00:03:16.862 11:12:59 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:17.436 make[1]: Nothing to be done for 'all'. 
00:03:29.648 The Meson build system 00:03:29.648 Version: 1.5.0 00:03:29.648 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:29.648 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:29.648 Build type: native build 00:03:29.648 Program cat found: YES (/usr/bin/cat) 00:03:29.648 Project name: DPDK 00:03:29.648 Project version: 24.03.0 00:03:29.648 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:29.648 C linker for the host machine: cc ld.bfd 2.40-14 00:03:29.648 Host machine cpu family: x86_64 00:03:29.648 Host machine cpu: x86_64 00:03:29.648 Message: ## Building in Developer Mode ## 00:03:29.648 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:29.648 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:29.648 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:29.648 Program python3 found: YES (/usr/bin/python3) 00:03:29.648 Program cat found: YES (/usr/bin/cat) 00:03:29.648 Compiler for C supports arguments -march=native: YES 00:03:29.648 Checking for size of "void *" : 8 00:03:29.648 Checking for size of "void *" : 8 (cached) 00:03:29.648 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:29.648 Library m found: YES 00:03:29.648 Library numa found: YES 00:03:29.648 Has header "numaif.h" : YES 00:03:29.648 Library fdt found: NO 00:03:29.648 Library execinfo found: NO 00:03:29.648 Has header "execinfo.h" : YES 00:03:29.648 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:29.648 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:29.648 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:29.648 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:29.648 Run-time dependency openssl found: YES 3.1.1 00:03:29.648 Run-time dependency libpcap found: YES 1.10.4 00:03:29.648 Has header "pcap.h" with dependency 
libpcap: YES 00:03:29.648 Compiler for C supports arguments -Wcast-qual: YES 00:03:29.648 Compiler for C supports arguments -Wdeprecated: YES 00:03:29.648 Compiler for C supports arguments -Wformat: YES 00:03:29.648 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:29.648 Compiler for C supports arguments -Wformat-security: NO 00:03:29.648 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:29.648 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:29.648 Compiler for C supports arguments -Wnested-externs: YES 00:03:29.648 Compiler for C supports arguments -Wold-style-definition: YES 00:03:29.648 Compiler for C supports arguments -Wpointer-arith: YES 00:03:29.648 Compiler for C supports arguments -Wsign-compare: YES 00:03:29.648 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:29.648 Compiler for C supports arguments -Wundef: YES 00:03:29.648 Compiler for C supports arguments -Wwrite-strings: YES 00:03:29.648 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:29.648 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:29.648 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:29.648 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:29.648 Program objdump found: YES (/usr/bin/objdump) 00:03:29.648 Compiler for C supports arguments -mavx512f: YES 00:03:29.648 Checking if "AVX512 checking" compiles: YES 00:03:29.648 Fetching value of define "__SSE4_2__" : 1 00:03:29.648 Fetching value of define "__AES__" : 1 00:03:29.648 Fetching value of define "__AVX__" : 1 00:03:29.648 Fetching value of define "__AVX2__" : 1 00:03:29.648 Fetching value of define "__AVX512BW__" : 1 00:03:29.648 Fetching value of define "__AVX512CD__" : 1 00:03:29.648 Fetching value of define "__AVX512DQ__" : 1 00:03:29.648 Fetching value of define "__AVX512F__" : 1 00:03:29.648 Fetching value of define "__AVX512VL__" : 1 00:03:29.648 Fetching value of define 
"__PCLMUL__" : 1 00:03:29.648 Fetching value of define "__RDRND__" : 1 00:03:29.648 Fetching value of define "__RDSEED__" : 1 00:03:29.648 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:29.648 Fetching value of define "__znver1__" : (undefined) 00:03:29.648 Fetching value of define "__znver2__" : (undefined) 00:03:29.648 Fetching value of define "__znver3__" : (undefined) 00:03:29.648 Fetching value of define "__znver4__" : (undefined) 00:03:29.648 Library asan found: YES 00:03:29.648 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:29.648 Message: lib/log: Defining dependency "log" 00:03:29.648 Message: lib/kvargs: Defining dependency "kvargs" 00:03:29.648 Message: lib/telemetry: Defining dependency "telemetry" 00:03:29.648 Library rt found: YES 00:03:29.648 Checking for function "getentropy" : NO 00:03:29.648 Message: lib/eal: Defining dependency "eal" 00:03:29.648 Message: lib/ring: Defining dependency "ring" 00:03:29.648 Message: lib/rcu: Defining dependency "rcu" 00:03:29.648 Message: lib/mempool: Defining dependency "mempool" 00:03:29.648 Message: lib/mbuf: Defining dependency "mbuf" 00:03:29.648 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:29.648 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:29.648 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:29.648 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:29.648 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:29.648 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:29.648 Compiler for C supports arguments -mpclmul: YES 00:03:29.648 Compiler for C supports arguments -maes: YES 00:03:29.648 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:29.648 Compiler for C supports arguments -mavx512bw: YES 00:03:29.648 Compiler for C supports arguments -mavx512dq: YES 00:03:29.648 Compiler for C supports arguments -mavx512vl: YES 00:03:29.648 Compiler for C supports arguments -mvpclmulqdq: YES 
00:03:29.648 Compiler for C supports arguments -mavx2: YES 00:03:29.648 Compiler for C supports arguments -mavx: YES 00:03:29.648 Message: lib/net: Defining dependency "net" 00:03:29.648 Message: lib/meter: Defining dependency "meter" 00:03:29.648 Message: lib/ethdev: Defining dependency "ethdev" 00:03:29.648 Message: lib/pci: Defining dependency "pci" 00:03:29.648 Message: lib/cmdline: Defining dependency "cmdline" 00:03:29.648 Message: lib/hash: Defining dependency "hash" 00:03:29.648 Message: lib/timer: Defining dependency "timer" 00:03:29.648 Message: lib/compressdev: Defining dependency "compressdev" 00:03:29.648 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:29.648 Message: lib/dmadev: Defining dependency "dmadev" 00:03:29.648 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:29.648 Message: lib/power: Defining dependency "power" 00:03:29.648 Message: lib/reorder: Defining dependency "reorder" 00:03:29.648 Message: lib/security: Defining dependency "security" 00:03:29.648 Has header "linux/userfaultfd.h" : YES 00:03:29.648 Has header "linux/vduse.h" : YES 00:03:29.648 Message: lib/vhost: Defining dependency "vhost" 00:03:29.648 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:29.648 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:29.648 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:29.648 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:29.648 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:29.648 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:29.648 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:29.648 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:29.648 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:29.648 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:29.648 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:29.648 Configuring doxy-api-html.conf using configuration 00:03:29.648 Configuring doxy-api-man.conf using configuration 00:03:29.648 Program mandb found: YES (/usr/bin/mandb) 00:03:29.648 Program sphinx-build found: NO 00:03:29.648 Configuring rte_build_config.h using configuration 00:03:29.648 Message: 00:03:29.648 ================= 00:03:29.648 Applications Enabled 00:03:29.648 ================= 00:03:29.648 00:03:29.648 apps: 00:03:29.648 00:03:29.648 00:03:29.648 Message: 00:03:29.648 ================= 00:03:29.648 Libraries Enabled 00:03:29.648 ================= 00:03:29.648 00:03:29.648 libs: 00:03:29.648 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:29.648 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:29.648 cryptodev, dmadev, power, reorder, security, vhost, 00:03:29.648 00:03:29.648 Message: 00:03:29.648 =============== 00:03:29.648 Drivers Enabled 00:03:29.648 =============== 00:03:29.648 00:03:29.648 common: 00:03:29.648 00:03:29.648 bus: 00:03:29.648 pci, vdev, 00:03:29.648 mempool: 00:03:29.648 ring, 00:03:29.648 dma: 00:03:29.648 00:03:29.648 net: 00:03:29.648 00:03:29.648 crypto: 00:03:29.648 00:03:29.648 compress: 00:03:29.648 00:03:29.649 vdpa: 00:03:29.649 00:03:29.649 00:03:29.649 Message: 00:03:29.649 ================= 00:03:29.649 Content Skipped 00:03:29.649 ================= 00:03:29.649 00:03:29.649 apps: 00:03:29.649 dumpcap: explicitly disabled via build config 00:03:29.649 graph: explicitly disabled via build config 00:03:29.649 pdump: explicitly disabled via build config 00:03:29.649 proc-info: explicitly disabled via build config 00:03:29.649 test-acl: explicitly disabled via build config 00:03:29.649 test-bbdev: explicitly disabled via build config 00:03:29.649 test-cmdline: explicitly disabled via build config 00:03:29.649 test-compress-perf: explicitly disabled via build config 00:03:29.649 test-crypto-perf: explicitly disabled via build 
config 00:03:29.649 test-dma-perf: explicitly disabled via build config 00:03:29.649 test-eventdev: explicitly disabled via build config 00:03:29.649 test-fib: explicitly disabled via build config 00:03:29.649 test-flow-perf: explicitly disabled via build config 00:03:29.649 test-gpudev: explicitly disabled via build config 00:03:29.649 test-mldev: explicitly disabled via build config 00:03:29.649 test-pipeline: explicitly disabled via build config 00:03:29.649 test-pmd: explicitly disabled via build config 00:03:29.649 test-regex: explicitly disabled via build config 00:03:29.649 test-sad: explicitly disabled via build config 00:03:29.649 test-security-perf: explicitly disabled via build config 00:03:29.649 00:03:29.649 libs: 00:03:29.649 argparse: explicitly disabled via build config 00:03:29.649 metrics: explicitly disabled via build config 00:03:29.649 acl: explicitly disabled via build config 00:03:29.649 bbdev: explicitly disabled via build config 00:03:29.649 bitratestats: explicitly disabled via build config 00:03:29.649 bpf: explicitly disabled via build config 00:03:29.649 cfgfile: explicitly disabled via build config 00:03:29.649 distributor: explicitly disabled via build config 00:03:29.649 efd: explicitly disabled via build config 00:03:29.649 eventdev: explicitly disabled via build config 00:03:29.649 dispatcher: explicitly disabled via build config 00:03:29.649 gpudev: explicitly disabled via build config 00:03:29.649 gro: explicitly disabled via build config 00:03:29.649 gso: explicitly disabled via build config 00:03:29.649 ip_frag: explicitly disabled via build config 00:03:29.649 jobstats: explicitly disabled via build config 00:03:29.649 latencystats: explicitly disabled via build config 00:03:29.649 lpm: explicitly disabled via build config 00:03:29.649 member: explicitly disabled via build config 00:03:29.649 pcapng: explicitly disabled via build config 00:03:29.649 rawdev: explicitly disabled via build config 00:03:29.649 regexdev: explicitly 
disabled via build config 00:03:29.649 mldev: explicitly disabled via build config 00:03:29.649 rib: explicitly disabled via build config 00:03:29.649 sched: explicitly disabled via build config 00:03:29.649 stack: explicitly disabled via build config 00:03:29.649 ipsec: explicitly disabled via build config 00:03:29.649 pdcp: explicitly disabled via build config 00:03:29.649 fib: explicitly disabled via build config 00:03:29.649 port: explicitly disabled via build config 00:03:29.649 pdump: explicitly disabled via build config 00:03:29.649 table: explicitly disabled via build config 00:03:29.649 pipeline: explicitly disabled via build config 00:03:29.649 graph: explicitly disabled via build config 00:03:29.649 node: explicitly disabled via build config 00:03:29.649 00:03:29.649 drivers: 00:03:29.649 common/cpt: not in enabled drivers build config 00:03:29.649 common/dpaax: not in enabled drivers build config 00:03:29.649 common/iavf: not in enabled drivers build config 00:03:29.649 common/idpf: not in enabled drivers build config 00:03:29.649 common/ionic: not in enabled drivers build config 00:03:29.649 common/mvep: not in enabled drivers build config 00:03:29.649 common/octeontx: not in enabled drivers build config 00:03:29.649 bus/auxiliary: not in enabled drivers build config 00:03:29.649 bus/cdx: not in enabled drivers build config 00:03:29.649 bus/dpaa: not in enabled drivers build config 00:03:29.649 bus/fslmc: not in enabled drivers build config 00:03:29.649 bus/ifpga: not in enabled drivers build config 00:03:29.649 bus/platform: not in enabled drivers build config 00:03:29.649 bus/uacce: not in enabled drivers build config 00:03:29.649 bus/vmbus: not in enabled drivers build config 00:03:29.649 common/cnxk: not in enabled drivers build config 00:03:29.649 common/mlx5: not in enabled drivers build config 00:03:29.649 common/nfp: not in enabled drivers build config 00:03:29.649 common/nitrox: not in enabled drivers build config 00:03:29.649 common/qat: not 
in enabled drivers build config 00:03:29.649 common/sfc_efx: not in enabled drivers build config 00:03:29.649 mempool/bucket: not in enabled drivers build config 00:03:29.649 mempool/cnxk: not in enabled drivers build config 00:03:29.649 mempool/dpaa: not in enabled drivers build config 00:03:29.649 mempool/dpaa2: not in enabled drivers build config 00:03:29.649 mempool/octeontx: not in enabled drivers build config 00:03:29.649 mempool/stack: not in enabled drivers build config 00:03:29.649 dma/cnxk: not in enabled drivers build config 00:03:29.649 dma/dpaa: not in enabled drivers build config 00:03:29.649 dma/dpaa2: not in enabled drivers build config 00:03:29.649 dma/hisilicon: not in enabled drivers build config 00:03:29.649 dma/idxd: not in enabled drivers build config 00:03:29.649 dma/ioat: not in enabled drivers build config 00:03:29.649 dma/skeleton: not in enabled drivers build config 00:03:29.649 net/af_packet: not in enabled drivers build config 00:03:29.649 net/af_xdp: not in enabled drivers build config 00:03:29.649 net/ark: not in enabled drivers build config 00:03:29.649 net/atlantic: not in enabled drivers build config 00:03:29.649 net/avp: not in enabled drivers build config 00:03:29.649 net/axgbe: not in enabled drivers build config 00:03:29.649 net/bnx2x: not in enabled drivers build config 00:03:29.649 net/bnxt: not in enabled drivers build config 00:03:29.649 net/bonding: not in enabled drivers build config 00:03:29.649 net/cnxk: not in enabled drivers build config 00:03:29.649 net/cpfl: not in enabled drivers build config 00:03:29.649 net/cxgbe: not in enabled drivers build config 00:03:29.649 net/dpaa: not in enabled drivers build config 00:03:29.649 net/dpaa2: not in enabled drivers build config 00:03:29.649 net/e1000: not in enabled drivers build config 00:03:29.649 net/ena: not in enabled drivers build config 00:03:29.649 net/enetc: not in enabled drivers build config 00:03:29.649 net/enetfec: not in enabled drivers build config 
00:03:29.649 net/enic: not in enabled drivers build config 00:03:29.649 net/failsafe: not in enabled drivers build config 00:03:29.649 net/fm10k: not in enabled drivers build config 00:03:29.649 net/gve: not in enabled drivers build config 00:03:29.649 net/hinic: not in enabled drivers build config 00:03:29.649 net/hns3: not in enabled drivers build config 00:03:29.649 net/i40e: not in enabled drivers build config 00:03:29.649 net/iavf: not in enabled drivers build config 00:03:29.649 net/ice: not in enabled drivers build config 00:03:29.649 net/idpf: not in enabled drivers build config 00:03:29.649 net/igc: not in enabled drivers build config 00:03:29.649 net/ionic: not in enabled drivers build config 00:03:29.649 net/ipn3ke: not in enabled drivers build config 00:03:29.649 net/ixgbe: not in enabled drivers build config 00:03:29.649 net/mana: not in enabled drivers build config 00:03:29.649 net/memif: not in enabled drivers build config 00:03:29.649 net/mlx4: not in enabled drivers build config 00:03:29.649 net/mlx5: not in enabled drivers build config 00:03:29.649 net/mvneta: not in enabled drivers build config 00:03:29.649 net/mvpp2: not in enabled drivers build config 00:03:29.649 net/netvsc: not in enabled drivers build config 00:03:29.649 net/nfb: not in enabled drivers build config 00:03:29.649 net/nfp: not in enabled drivers build config 00:03:29.649 net/ngbe: not in enabled drivers build config 00:03:29.649 net/null: not in enabled drivers build config 00:03:29.649 net/octeontx: not in enabled drivers build config 00:03:29.649 net/octeon_ep: not in enabled drivers build config 00:03:29.649 net/pcap: not in enabled drivers build config 00:03:29.649 net/pfe: not in enabled drivers build config 00:03:29.649 net/qede: not in enabled drivers build config 00:03:29.649 net/ring: not in enabled drivers build config 00:03:29.649 net/sfc: not in enabled drivers build config 00:03:29.649 net/softnic: not in enabled drivers build config 00:03:29.649 net/tap: not in 
enabled drivers build config 00:03:29.649 net/thunderx: not in enabled drivers build config 00:03:29.649 net/txgbe: not in enabled drivers build config 00:03:29.649 net/vdev_netvsc: not in enabled drivers build config 00:03:29.649 net/vhost: not in enabled drivers build config 00:03:29.649 net/virtio: not in enabled drivers build config 00:03:29.649 net/vmxnet3: not in enabled drivers build config 00:03:29.649 raw/*: missing internal dependency, "rawdev" 00:03:29.649 crypto/armv8: not in enabled drivers build config 00:03:29.649 crypto/bcmfs: not in enabled drivers build config 00:03:29.649 crypto/caam_jr: not in enabled drivers build config 00:03:29.649 crypto/ccp: not in enabled drivers build config 00:03:29.649 crypto/cnxk: not in enabled drivers build config 00:03:29.649 crypto/dpaa_sec: not in enabled drivers build config 00:03:29.649 crypto/dpaa2_sec: not in enabled drivers build config 00:03:29.649 crypto/ipsec_mb: not in enabled drivers build config 00:03:29.649 crypto/mlx5: not in enabled drivers build config 00:03:29.649 crypto/mvsam: not in enabled drivers build config 00:03:29.649 crypto/nitrox: not in enabled drivers build config 00:03:29.649 crypto/null: not in enabled drivers build config 00:03:29.649 crypto/octeontx: not in enabled drivers build config 00:03:29.649 crypto/openssl: not in enabled drivers build config 00:03:29.649 crypto/scheduler: not in enabled drivers build config 00:03:29.649 crypto/uadk: not in enabled drivers build config 00:03:29.649 crypto/virtio: not in enabled drivers build config 00:03:29.649 compress/isal: not in enabled drivers build config 00:03:29.649 compress/mlx5: not in enabled drivers build config 00:03:29.649 compress/nitrox: not in enabled drivers build config 00:03:29.649 compress/octeontx: not in enabled drivers build config 00:03:29.649 compress/zlib: not in enabled drivers build config 00:03:29.649 regex/*: missing internal dependency, "regexdev" 00:03:29.649 ml/*: missing internal dependency, "mldev" 
00:03:29.649 vdpa/ifc: not in enabled drivers build config 00:03:29.649 vdpa/mlx5: not in enabled drivers build config 00:03:29.649 vdpa/nfp: not in enabled drivers build config 00:03:29.650 vdpa/sfc: not in enabled drivers build config 00:03:29.650 event/*: missing internal dependency, "eventdev" 00:03:29.650 baseband/*: missing internal dependency, "bbdev" 00:03:29.650 gpu/*: missing internal dependency, "gpudev" 00:03:29.650 00:03:29.650 00:03:29.650 Build targets in project: 85 00:03:29.650 00:03:29.650 DPDK 24.03.0 00:03:29.650 00:03:29.650 User defined options 00:03:29.650 buildtype : debug 00:03:29.650 default_library : shared 00:03:29.650 libdir : lib 00:03:29.650 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:29.650 b_sanitize : address 00:03:29.650 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:29.650 c_link_args : 00:03:29.650 cpu_instruction_set: native 00:03:29.650 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:29.650 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:29.650 enable_docs : false 00:03:29.650 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:29.650 enable_kmods : false 00:03:29.650 max_lcores : 128 00:03:29.650 tests : false 00:03:29.650 00:03:29.650 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:29.650 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:29.650 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:03:29.650 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:29.650 [3/268] Linking static target lib/librte_log.a 00:03:29.650 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:29.650 [5/268] Linking static target lib/librte_kvargs.a 00:03:29.650 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:29.650 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:29.650 [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.650 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:29.650 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:29.650 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:29.650 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:29.650 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:29.650 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:29.650 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:29.650 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:29.650 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:29.650 [18/268] Linking static target lib/librte_telemetry.a 00:03:29.650 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:29.650 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.909 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:29.909 [22/268] Linking target lib/librte_log.so.24.1 00:03:29.909 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:29.909 [24/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:29.909 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:29.909 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:29.909 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:30.169 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:30.169 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:30.169 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:30.169 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:30.169 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.169 [33/268] Linking target lib/librte_kvargs.so.24.1 00:03:30.428 [34/268] Linking target lib/librte_telemetry.so.24.1 00:03:30.428 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:30.428 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:30.428 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:30.428 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:30.428 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:30.688 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:30.688 [41/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:30.688 [42/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:30.688 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:30.688 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:30.688 [45/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:30.688 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:30.948 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:30.948 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:31.208 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:31.208 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:31.208 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:31.208 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:31.468 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:31.468 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:31.468 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:31.468 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:31.468 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:31.727 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:31.727 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:31.727 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:31.727 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:31.727 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:31.727 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:31.727 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:31.985 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:31.985 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:31.985 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 
00:03:32.244 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:32.244 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:32.503 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:32.503 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:32.503 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:32.503 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:32.503 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:32.503 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:32.503 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:32.503 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:32.503 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:32.763 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:32.763 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:32.763 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:32.763 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:33.022 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:33.022 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:33.022 [85/268] Linking static target lib/librte_eal.a 00:03:33.022 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:33.022 [87/268] Linking static target lib/librte_ring.a 00:03:33.022 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:33.281 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:33.281 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:33.281 [91/268] Linking static target lib/librte_rcu.a 
00:03:33.282 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:33.282 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:33.541 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:33.541 [95/268] Linking static target lib/librte_mempool.a 00:03:33.541 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:33.541 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:33.541 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:33.541 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:33.799 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.799 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.799 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:34.058 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:34.058 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:34.318 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:34.318 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:34.318 [107/268] Linking static target lib/librte_net.a 00:03:34.318 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:34.318 [109/268] Linking static target lib/librte_meter.a 00:03:34.318 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:34.318 [111/268] Linking static target lib/librte_mbuf.a 00:03:34.318 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:34.318 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:34.577 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:34.577 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by 
meson to capture output) 00:03:34.577 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.577 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.836 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:35.095 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:35.095 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:35.095 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:35.355 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:35.355 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.614 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:35.614 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:35.614 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:35.614 [127/268] Linking static target lib/librte_pci.a 00:03:35.614 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:35.614 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:35.873 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:35.873 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:35.873 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:35.873 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:35.873 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:35.873 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:35.873 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:35.873 [137/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:35.873 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:35.873 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:35.873 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:36.132 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:36.132 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:36.132 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:36.132 [144/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.132 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:36.391 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:36.391 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:36.391 [148/268] Linking static target lib/librte_cmdline.a 00:03:36.662 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:36.662 [150/268] Linking static target lib/librte_timer.a 00:03:36.662 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:36.662 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:36.662 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:36.948 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:36.948 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:37.207 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:37.207 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:37.207 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.207 
[159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:37.207 [160/268] Linking static target lib/librte_compressdev.a 00:03:37.207 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:37.466 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:37.466 [163/268] Linking static target lib/librte_ethdev.a 00:03:37.466 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:37.466 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:37.466 [166/268] Linking static target lib/librte_hash.a 00:03:37.724 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:37.724 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:37.724 [169/268] Linking static target lib/librte_dmadev.a 00:03:37.724 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:37.982 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:37.982 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:37.982 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.982 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:38.240 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.240 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:38.499 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:38.499 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:38.499 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:38.499 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.499 [181/268] Compiling C object 
lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:38.758 [182/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.758 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:38.758 [184/268] Linking static target lib/librte_cryptodev.a 00:03:38.758 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:38.759 [186/268] Linking static target lib/librte_power.a 00:03:39.018 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:39.018 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:39.018 [189/268] Linking static target lib/librte_reorder.a 00:03:39.018 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:39.278 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:39.278 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:39.278 [193/268] Linking static target lib/librte_security.a 00:03:39.537 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.796 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.055 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:40.055 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.055 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:40.317 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:40.317 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:40.574 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:40.575 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:40.575 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:40.832 
[204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:40.832 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:41.089 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:41.089 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:41.089 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:41.089 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:41.089 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:41.089 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.348 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:41.348 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:41.348 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:41.348 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:41.606 [216/268] Linking static target drivers/librte_bus_vdev.a 00:03:41.606 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:41.606 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:41.606 [219/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:41.606 [220/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:41.606 [221/268] Linking static target drivers/librte_bus_pci.a 00:03:41.606 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:41.864 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:41.864 [224/268] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:41.865 [225/268] Linking static target drivers/librte_mempool_ring.a 00:03:41.865 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.121 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.685 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:44.586 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.586 [230/268] Linking target lib/librte_eal.so.24.1 00:03:44.845 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:44.845 [232/268] Linking target lib/librte_meter.so.24.1 00:03:44.845 [233/268] Linking target lib/librte_ring.so.24.1 00:03:44.845 [234/268] Linking target lib/librte_pci.so.24.1 00:03:44.845 [235/268] Linking target lib/librte_dmadev.so.24.1 00:03:44.845 [236/268] Linking target lib/librte_timer.so.24.1 00:03:44.845 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:44.845 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:44.845 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:44.845 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:44.845 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:44.845 [242/268] Linking target lib/librte_mempool.so.24.1 00:03:45.104 [243/268] Linking target lib/librte_rcu.so.24.1 00:03:45.104 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:45.104 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:45.104 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:45.104 [247/268] Generating symbol file 
lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:45.104 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:45.104 [249/268] Linking target lib/librte_mbuf.so.24.1 00:03:45.362 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:45.362 [251/268] Linking target lib/librte_compressdev.so.24.1 00:03:45.362 [252/268] Linking target lib/librte_net.so.24.1 00:03:45.362 [253/268] Linking target lib/librte_reorder.so.24.1 00:03:45.362 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:03:45.620 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:45.620 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:45.620 [257/268] Linking target lib/librte_hash.so.24.1 00:03:45.620 [258/268] Linking target lib/librte_cmdline.so.24.1 00:03:45.620 [259/268] Linking target lib/librte_security.so.24.1 00:03:45.620 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:46.569 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.828 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:46.828 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:47.086 [264/268] Linking target lib/librte_power.so.24.1 00:03:47.086 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:47.086 [266/268] Linking static target lib/librte_vhost.a 00:03:49.625 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.625 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:49.625 INFO: autodetecting backend as ninja 00:03:49.625 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:07.725 CC lib/ut/ut.o 00:04:07.725 CC lib/log/log_flags.o 00:04:07.725 CC lib/log/log.o 
00:04:07.725 CC lib/log/log_deprecated.o 00:04:07.725 CC lib/ut_mock/mock.o 00:04:07.725 LIB libspdk_ut.a 00:04:07.725 LIB libspdk_log.a 00:04:07.725 SO libspdk_ut.so.2.0 00:04:07.725 LIB libspdk_ut_mock.a 00:04:07.725 SO libspdk_log.so.7.1 00:04:07.725 SYMLINK libspdk_ut.so 00:04:07.725 SO libspdk_ut_mock.so.6.0 00:04:07.725 SYMLINK libspdk_log.so 00:04:07.725 SYMLINK libspdk_ut_mock.so 00:04:07.725 CC lib/ioat/ioat.o 00:04:07.725 CC lib/util/base64.o 00:04:07.725 CC lib/util/cpuset.o 00:04:07.725 CC lib/util/bit_array.o 00:04:07.725 CC lib/util/crc16.o 00:04:07.725 CC lib/util/crc32c.o 00:04:07.725 CC lib/util/crc32.o 00:04:07.725 CXX lib/trace_parser/trace.o 00:04:07.725 CC lib/dma/dma.o 00:04:07.725 CC lib/vfio_user/host/vfio_user_pci.o 00:04:07.725 CC lib/util/crc32_ieee.o 00:04:07.725 CC lib/vfio_user/host/vfio_user.o 00:04:07.725 CC lib/util/crc64.o 00:04:07.725 CC lib/util/dif.o 00:04:07.725 CC lib/util/fd.o 00:04:07.725 CC lib/util/fd_group.o 00:04:07.725 LIB libspdk_dma.a 00:04:07.725 SO libspdk_dma.so.5.0 00:04:07.725 CC lib/util/file.o 00:04:07.725 CC lib/util/hexlify.o 00:04:07.725 LIB libspdk_ioat.a 00:04:07.725 SO libspdk_ioat.so.7.0 00:04:07.725 SYMLINK libspdk_dma.so 00:04:07.725 CC lib/util/iov.o 00:04:07.725 CC lib/util/math.o 00:04:07.725 CC lib/util/net.o 00:04:07.725 SYMLINK libspdk_ioat.so 00:04:07.725 CC lib/util/pipe.o 00:04:07.725 LIB libspdk_vfio_user.a 00:04:07.725 SO libspdk_vfio_user.so.5.0 00:04:07.725 CC lib/util/strerror_tls.o 00:04:07.725 CC lib/util/string.o 00:04:07.725 SYMLINK libspdk_vfio_user.so 00:04:07.725 CC lib/util/uuid.o 00:04:07.725 CC lib/util/xor.o 00:04:07.725 CC lib/util/zipf.o 00:04:07.725 CC lib/util/md5.o 00:04:08.294 LIB libspdk_util.a 00:04:08.294 SO libspdk_util.so.10.1 00:04:08.294 LIB libspdk_trace_parser.a 00:04:08.294 SO libspdk_trace_parser.so.6.0 00:04:08.294 SYMLINK libspdk_util.so 00:04:08.553 SYMLINK libspdk_trace_parser.so 00:04:08.553 CC lib/env_dpdk/env.o 00:04:08.553 CC lib/env_dpdk/memory.o 
00:04:08.553 CC lib/env_dpdk/pci.o 00:04:08.553 CC lib/env_dpdk/init.o 00:04:08.553 CC lib/env_dpdk/threads.o 00:04:08.553 CC lib/conf/conf.o 00:04:08.553 CC lib/json/json_parse.o 00:04:08.553 CC lib/vmd/vmd.o 00:04:08.553 CC lib/idxd/idxd.o 00:04:08.553 CC lib/rdma_utils/rdma_utils.o 00:04:08.813 CC lib/env_dpdk/pci_ioat.o 00:04:08.813 LIB libspdk_conf.a 00:04:08.813 CC lib/json/json_util.o 00:04:08.813 SO libspdk_conf.so.6.0 00:04:08.813 CC lib/json/json_write.o 00:04:08.813 LIB libspdk_rdma_utils.a 00:04:08.813 SYMLINK libspdk_conf.so 00:04:08.813 CC lib/vmd/led.o 00:04:08.813 SO libspdk_rdma_utils.so.1.0 00:04:09.073 CC lib/env_dpdk/pci_virtio.o 00:04:09.073 SYMLINK libspdk_rdma_utils.so 00:04:09.073 CC lib/env_dpdk/pci_vmd.o 00:04:09.073 CC lib/env_dpdk/pci_idxd.o 00:04:09.073 CC lib/env_dpdk/pci_event.o 00:04:09.073 CC lib/idxd/idxd_user.o 00:04:09.073 CC lib/idxd/idxd_kernel.o 00:04:09.073 CC lib/env_dpdk/sigbus_handler.o 00:04:09.073 LIB libspdk_json.a 00:04:09.332 SO libspdk_json.so.6.0 00:04:09.332 CC lib/rdma_provider/common.o 00:04:09.332 CC lib/env_dpdk/pci_dpdk.o 00:04:09.332 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:09.332 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:09.332 SYMLINK libspdk_json.so 00:04:09.332 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:09.332 LIB libspdk_idxd.a 00:04:09.332 LIB libspdk_vmd.a 00:04:09.332 SO libspdk_idxd.so.12.1 00:04:09.332 SO libspdk_vmd.so.6.0 00:04:09.332 SYMLINK libspdk_idxd.so 00:04:09.592 SYMLINK libspdk_vmd.so 00:04:09.592 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:09.592 CC lib/jsonrpc/jsonrpc_server.o 00:04:09.592 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:09.592 CC lib/jsonrpc/jsonrpc_client.o 00:04:09.592 LIB libspdk_rdma_provider.a 00:04:09.592 SO libspdk_rdma_provider.so.7.0 00:04:09.592 SYMLINK libspdk_rdma_provider.so 00:04:09.853 LIB libspdk_jsonrpc.a 00:04:09.853 SO libspdk_jsonrpc.so.6.0 00:04:09.853 SYMLINK libspdk_jsonrpc.so 00:04:10.113 LIB libspdk_env_dpdk.a 00:04:10.372 CC lib/rpc/rpc.o 00:04:10.372 
SO libspdk_env_dpdk.so.15.1 00:04:10.372 LIB libspdk_rpc.a 00:04:10.372 SYMLINK libspdk_env_dpdk.so 00:04:10.632 SO libspdk_rpc.so.6.0 00:04:10.632 SYMLINK libspdk_rpc.so 00:04:10.891 CC lib/keyring/keyring.o 00:04:10.891 CC lib/notify/notify.o 00:04:10.891 CC lib/keyring/keyring_rpc.o 00:04:10.891 CC lib/notify/notify_rpc.o 00:04:10.891 CC lib/trace/trace.o 00:04:10.891 CC lib/trace/trace_flags.o 00:04:10.891 CC lib/trace/trace_rpc.o 00:04:11.150 LIB libspdk_notify.a 00:04:11.150 SO libspdk_notify.so.6.0 00:04:11.150 SYMLINK libspdk_notify.so 00:04:11.150 LIB libspdk_keyring.a 00:04:11.150 LIB libspdk_trace.a 00:04:11.150 SO libspdk_keyring.so.2.0 00:04:11.410 SO libspdk_trace.so.11.0 00:04:11.410 SYMLINK libspdk_keyring.so 00:04:11.410 SYMLINK libspdk_trace.so 00:04:11.670 CC lib/sock/sock_rpc.o 00:04:11.670 CC lib/sock/sock.o 00:04:11.670 CC lib/thread/thread.o 00:04:11.670 CC lib/thread/iobuf.o 00:04:12.240 LIB libspdk_sock.a 00:04:12.240 SO libspdk_sock.so.10.0 00:04:12.499 SYMLINK libspdk_sock.so 00:04:12.759 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:12.759 CC lib/nvme/nvme_ctrlr.o 00:04:12.759 CC lib/nvme/nvme_fabric.o 00:04:12.759 CC lib/nvme/nvme_ns_cmd.o 00:04:12.759 CC lib/nvme/nvme_ns.o 00:04:12.759 CC lib/nvme/nvme_pcie_common.o 00:04:12.759 CC lib/nvme/nvme_pcie.o 00:04:12.759 CC lib/nvme/nvme.o 00:04:12.759 CC lib/nvme/nvme_qpair.o 00:04:13.328 LIB libspdk_thread.a 00:04:13.587 SO libspdk_thread.so.11.0 00:04:13.587 CC lib/nvme/nvme_quirks.o 00:04:13.587 CC lib/nvme/nvme_transport.o 00:04:13.587 CC lib/nvme/nvme_discovery.o 00:04:13.587 SYMLINK libspdk_thread.so 00:04:13.587 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:13.587 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:13.587 CC lib/nvme/nvme_tcp.o 00:04:13.847 CC lib/nvme/nvme_opal.o 00:04:13.847 CC lib/nvme/nvme_io_msg.o 00:04:13.847 CC lib/nvme/nvme_poll_group.o 00:04:14.107 CC lib/nvme/nvme_zns.o 00:04:14.107 CC lib/accel/accel.o 00:04:14.107 CC lib/blob/blobstore.o 00:04:14.107 CC lib/blob/request.o 00:04:14.366 
CC lib/blob/zeroes.o 00:04:14.366 CC lib/nvme/nvme_stubs.o 00:04:14.366 CC lib/nvme/nvme_auth.o 00:04:14.366 CC lib/blob/blob_bs_dev.o 00:04:14.626 CC lib/nvme/nvme_cuse.o 00:04:14.626 CC lib/nvme/nvme_rdma.o 00:04:14.884 CC lib/accel/accel_rpc.o 00:04:14.884 CC lib/init/json_config.o 00:04:14.884 CC lib/init/subsystem.o 00:04:15.143 CC lib/virtio/virtio.o 00:04:15.143 CC lib/init/subsystem_rpc.o 00:04:15.143 CC lib/init/rpc.o 00:04:15.402 CC lib/virtio/virtio_vhost_user.o 00:04:15.402 CC lib/accel/accel_sw.o 00:04:15.402 LIB libspdk_init.a 00:04:15.402 CC lib/virtio/virtio_vfio_user.o 00:04:15.402 SO libspdk_init.so.6.0 00:04:15.402 CC lib/fsdev/fsdev.o 00:04:15.402 CC lib/fsdev/fsdev_io.o 00:04:15.402 SYMLINK libspdk_init.so 00:04:15.402 CC lib/fsdev/fsdev_rpc.o 00:04:15.402 CC lib/virtio/virtio_pci.o 00:04:15.661 CC lib/event/app.o 00:04:15.661 CC lib/event/reactor.o 00:04:15.661 CC lib/event/log_rpc.o 00:04:15.661 LIB libspdk_accel.a 00:04:15.661 CC lib/event/app_rpc.o 00:04:15.661 SO libspdk_accel.so.16.0 00:04:15.661 SYMLINK libspdk_accel.so 00:04:15.661 CC lib/event/scheduler_static.o 00:04:15.921 LIB libspdk_virtio.a 00:04:15.921 SO libspdk_virtio.so.7.0 00:04:15.921 SYMLINK libspdk_virtio.so 00:04:15.921 CC lib/bdev/bdev.o 00:04:15.921 CC lib/bdev/bdev_zone.o 00:04:15.921 CC lib/bdev/scsi_nvme.o 00:04:15.921 CC lib/bdev/part.o 00:04:15.921 CC lib/bdev/bdev_rpc.o 00:04:16.180 LIB libspdk_event.a 00:04:16.180 LIB libspdk_fsdev.a 00:04:16.180 SO libspdk_fsdev.so.2.0 00:04:16.180 SO libspdk_event.so.14.0 00:04:16.180 LIB libspdk_nvme.a 00:04:16.180 SYMLINK libspdk_fsdev.so 00:04:16.180 SYMLINK libspdk_event.so 00:04:16.439 SO libspdk_nvme.so.15.0 00:04:16.698 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:16.698 SYMLINK libspdk_nvme.so 00:04:17.266 LIB libspdk_fuse_dispatcher.a 00:04:17.266 SO libspdk_fuse_dispatcher.so.1.0 00:04:17.526 SYMLINK libspdk_fuse_dispatcher.so 00:04:17.785 LIB libspdk_blob.a 00:04:18.044 SO libspdk_blob.so.11.0 00:04:18.044 SYMLINK 
libspdk_blob.so 00:04:18.304 CC lib/blobfs/blobfs.o 00:04:18.304 CC lib/blobfs/tree.o 00:04:18.562 CC lib/lvol/lvol.o 00:04:18.821 LIB libspdk_bdev.a 00:04:18.821 SO libspdk_bdev.so.17.0 00:04:19.080 SYMLINK libspdk_bdev.so 00:04:19.339 CC lib/nvmf/ctrlr_discovery.o 00:04:19.339 CC lib/nvmf/ctrlr.o 00:04:19.339 CC lib/nvmf/subsystem.o 00:04:19.339 CC lib/nvmf/ctrlr_bdev.o 00:04:19.339 CC lib/nbd/nbd.o 00:04:19.339 CC lib/ublk/ublk.o 00:04:19.339 CC lib/scsi/dev.o 00:04:19.339 CC lib/ftl/ftl_core.o 00:04:19.339 LIB libspdk_blobfs.a 00:04:19.339 SO libspdk_blobfs.so.10.0 00:04:19.598 CC lib/scsi/lun.o 00:04:19.598 SYMLINK libspdk_blobfs.so 00:04:19.598 CC lib/ublk/ublk_rpc.o 00:04:19.598 LIB libspdk_lvol.a 00:04:19.598 SO libspdk_lvol.so.10.0 00:04:19.598 CC lib/ftl/ftl_init.o 00:04:19.598 SYMLINK libspdk_lvol.so 00:04:19.598 CC lib/nbd/nbd_rpc.o 00:04:19.598 CC lib/nvmf/nvmf.o 00:04:19.598 CC lib/nvmf/nvmf_rpc.o 00:04:19.857 CC lib/nvmf/transport.o 00:04:19.857 CC lib/scsi/port.o 00:04:19.857 CC lib/ftl/ftl_layout.o 00:04:19.857 LIB libspdk_nbd.a 00:04:19.857 SO libspdk_nbd.so.7.0 00:04:19.857 CC lib/scsi/scsi.o 00:04:19.857 LIB libspdk_ublk.a 00:04:19.857 CC lib/nvmf/tcp.o 00:04:20.117 SYMLINK libspdk_nbd.so 00:04:20.117 CC lib/nvmf/stubs.o 00:04:20.117 SO libspdk_ublk.so.3.0 00:04:20.117 CC lib/scsi/scsi_bdev.o 00:04:20.117 SYMLINK libspdk_ublk.so 00:04:20.117 CC lib/scsi/scsi_pr.o 00:04:20.117 CC lib/ftl/ftl_debug.o 00:04:20.376 CC lib/ftl/ftl_io.o 00:04:20.376 CC lib/ftl/ftl_sb.o 00:04:20.376 CC lib/nvmf/mdns_server.o 00:04:20.376 CC lib/ftl/ftl_l2p.o 00:04:20.635 CC lib/scsi/scsi_rpc.o 00:04:20.635 CC lib/nvmf/rdma.o 00:04:20.635 CC lib/nvmf/auth.o 00:04:20.635 CC lib/ftl/ftl_l2p_flat.o 00:04:20.635 CC lib/ftl/ftl_nv_cache.o 00:04:20.635 CC lib/ftl/ftl_band.o 00:04:20.636 CC lib/scsi/task.o 00:04:20.636 CC lib/ftl/ftl_band_ops.o 00:04:20.896 CC lib/ftl/ftl_writer.o 00:04:20.896 CC lib/ftl/ftl_rq.o 00:04:20.896 LIB libspdk_scsi.a 00:04:20.896 SO 
libspdk_scsi.so.9.0 00:04:21.155 CC lib/ftl/ftl_reloc.o 00:04:21.155 SYMLINK libspdk_scsi.so 00:04:21.155 CC lib/ftl/ftl_l2p_cache.o 00:04:21.155 CC lib/ftl/ftl_p2l.o 00:04:21.155 CC lib/ftl/ftl_p2l_log.o 00:04:21.155 CC lib/iscsi/conn.o 00:04:21.426 CC lib/vhost/vhost.o 00:04:21.426 CC lib/iscsi/init_grp.o 00:04:21.426 CC lib/ftl/mngt/ftl_mngt.o 00:04:21.426 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:21.426 CC lib/vhost/vhost_rpc.o 00:04:21.698 CC lib/iscsi/iscsi.o 00:04:21.698 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:21.698 CC lib/iscsi/param.o 00:04:21.698 CC lib/iscsi/portal_grp.o 00:04:21.698 CC lib/vhost/vhost_scsi.o 00:04:21.698 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:21.959 CC lib/vhost/vhost_blk.o 00:04:21.959 CC lib/iscsi/tgt_node.o 00:04:21.959 CC lib/iscsi/iscsi_subsystem.o 00:04:21.959 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:21.959 CC lib/iscsi/iscsi_rpc.o 00:04:22.219 CC lib/vhost/rte_vhost_user.o 00:04:22.219 CC lib/iscsi/task.o 00:04:22.219 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:22.219 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:22.219 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:22.479 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:22.479 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:22.479 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:22.479 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:22.479 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:22.479 CC lib/ftl/utils/ftl_conf.o 00:04:22.738 CC lib/ftl/utils/ftl_md.o 00:04:22.738 CC lib/ftl/utils/ftl_mempool.o 00:04:22.738 CC lib/ftl/utils/ftl_bitmap.o 00:04:22.738 CC lib/ftl/utils/ftl_property.o 00:04:22.738 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:22.738 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:22.738 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:22.738 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:22.997 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:22.997 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:22.997 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:22.997 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:22.997 LIB libspdk_vhost.a 00:04:22.997 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:04:22.997 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:22.997 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:22.997 LIB libspdk_iscsi.a 00:04:23.255 SO libspdk_vhost.so.8.0 00:04:23.255 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:23.255 LIB libspdk_nvmf.a 00:04:23.255 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:23.255 CC lib/ftl/base/ftl_base_dev.o 00:04:23.255 SO libspdk_iscsi.so.8.0 00:04:23.255 CC lib/ftl/base/ftl_base_bdev.o 00:04:23.255 SYMLINK libspdk_vhost.so 00:04:23.255 CC lib/ftl/ftl_trace.o 00:04:23.255 SO libspdk_nvmf.so.20.0 00:04:23.514 SYMLINK libspdk_iscsi.so 00:04:23.514 LIB libspdk_ftl.a 00:04:23.514 SYMLINK libspdk_nvmf.so 00:04:23.773 SO libspdk_ftl.so.9.0 00:04:24.032 SYMLINK libspdk_ftl.so 00:04:24.602 CC module/env_dpdk/env_dpdk_rpc.o 00:04:24.602 CC module/sock/posix/posix.o 00:04:24.602 CC module/fsdev/aio/fsdev_aio.o 00:04:24.602 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:24.602 CC module/scheduler/gscheduler/gscheduler.o 00:04:24.602 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:24.602 CC module/accel/error/accel_error.o 00:04:24.602 CC module/accel/ioat/accel_ioat.o 00:04:24.602 CC module/keyring/file/keyring.o 00:04:24.602 CC module/blob/bdev/blob_bdev.o 00:04:24.602 LIB libspdk_env_dpdk_rpc.a 00:04:24.602 SO libspdk_env_dpdk_rpc.so.6.0 00:04:24.602 LIB libspdk_scheduler_gscheduler.a 00:04:24.602 CC module/keyring/file/keyring_rpc.o 00:04:24.602 SO libspdk_scheduler_gscheduler.so.4.0 00:04:24.602 SYMLINK libspdk_env_dpdk_rpc.so 00:04:24.602 LIB libspdk_scheduler_dpdk_governor.a 00:04:24.602 CC module/accel/ioat/accel_ioat_rpc.o 00:04:24.602 LIB libspdk_scheduler_dynamic.a 00:04:24.862 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:24.862 SO libspdk_scheduler_dynamic.so.4.0 00:04:24.862 SYMLINK libspdk_scheduler_gscheduler.so 00:04:24.862 CC module/accel/error/accel_error_rpc.o 00:04:24.862 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:24.862 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:24.862 CC 
module/fsdev/aio/linux_aio_mgr.o 00:04:24.862 SYMLINK libspdk_scheduler_dynamic.so 00:04:24.862 LIB libspdk_keyring_file.a 00:04:24.862 LIB libspdk_blob_bdev.a 00:04:24.862 SO libspdk_keyring_file.so.2.0 00:04:24.862 SO libspdk_blob_bdev.so.11.0 00:04:24.862 LIB libspdk_accel_ioat.a 00:04:24.862 CC module/keyring/linux/keyring.o 00:04:24.862 LIB libspdk_accel_error.a 00:04:24.862 SO libspdk_accel_ioat.so.6.0 00:04:24.862 SYMLINK libspdk_keyring_file.so 00:04:24.862 SO libspdk_accel_error.so.2.0 00:04:24.862 SYMLINK libspdk_blob_bdev.so 00:04:24.862 SYMLINK libspdk_accel_ioat.so 00:04:24.862 CC module/keyring/linux/keyring_rpc.o 00:04:25.121 SYMLINK libspdk_accel_error.so 00:04:25.121 CC module/accel/dsa/accel_dsa.o 00:04:25.121 CC module/accel/dsa/accel_dsa_rpc.o 00:04:25.121 LIB libspdk_keyring_linux.a 00:04:25.121 CC module/accel/iaa/accel_iaa.o 00:04:25.121 SO libspdk_keyring_linux.so.1.0 00:04:25.121 CC module/accel/iaa/accel_iaa_rpc.o 00:04:25.121 SYMLINK libspdk_keyring_linux.so 00:04:25.121 CC module/bdev/error/vbdev_error.o 00:04:25.121 CC module/bdev/delay/vbdev_delay.o 00:04:25.121 CC module/blobfs/bdev/blobfs_bdev.o 00:04:25.121 CC module/bdev/gpt/gpt.o 00:04:25.380 LIB libspdk_fsdev_aio.a 00:04:25.380 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:25.380 LIB libspdk_accel_dsa.a 00:04:25.380 SO libspdk_fsdev_aio.so.1.0 00:04:25.380 SO libspdk_accel_dsa.so.5.0 00:04:25.380 LIB libspdk_accel_iaa.a 00:04:25.380 CC module/bdev/lvol/vbdev_lvol.o 00:04:25.380 SO libspdk_accel_iaa.so.3.0 00:04:25.380 SYMLINK libspdk_accel_dsa.so 00:04:25.380 SYMLINK libspdk_fsdev_aio.so 00:04:25.380 CC module/bdev/gpt/vbdev_gpt.o 00:04:25.380 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:25.380 LIB libspdk_sock_posix.a 00:04:25.380 SYMLINK libspdk_accel_iaa.so 00:04:25.380 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:25.380 SO libspdk_sock_posix.so.6.0 00:04:25.380 LIB libspdk_blobfs_bdev.a 00:04:25.380 SO libspdk_blobfs_bdev.so.6.0 00:04:25.380 CC 
module/bdev/error/vbdev_error_rpc.o 00:04:25.639 SYMLINK libspdk_sock_posix.so 00:04:25.639 SYMLINK libspdk_blobfs_bdev.so 00:04:25.639 CC module/bdev/malloc/bdev_malloc.o 00:04:25.639 LIB libspdk_bdev_delay.a 00:04:25.639 LIB libspdk_bdev_error.a 00:04:25.639 SO libspdk_bdev_delay.so.6.0 00:04:25.639 CC module/bdev/null/bdev_null.o 00:04:25.639 SO libspdk_bdev_error.so.6.0 00:04:25.639 CC module/bdev/nvme/bdev_nvme.o 00:04:25.639 CC module/bdev/passthru/vbdev_passthru.o 00:04:25.639 SYMLINK libspdk_bdev_delay.so 00:04:25.639 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:25.639 LIB libspdk_bdev_gpt.a 00:04:25.639 SYMLINK libspdk_bdev_error.so 00:04:25.639 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:25.639 CC module/bdev/raid/bdev_raid.o 00:04:25.639 SO libspdk_bdev_gpt.so.6.0 00:04:25.897 SYMLINK libspdk_bdev_gpt.so 00:04:25.897 CC module/bdev/nvme/nvme_rpc.o 00:04:25.897 CC module/bdev/nvme/bdev_mdns_client.o 00:04:25.897 LIB libspdk_bdev_lvol.a 00:04:25.897 CC module/bdev/null/bdev_null_rpc.o 00:04:25.897 CC module/bdev/split/vbdev_split.o 00:04:25.897 SO libspdk_bdev_lvol.so.6.0 00:04:25.897 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:25.897 LIB libspdk_bdev_passthru.a 00:04:25.897 CC module/bdev/split/vbdev_split_rpc.o 00:04:25.897 SYMLINK libspdk_bdev_lvol.so 00:04:25.897 SO libspdk_bdev_passthru.so.6.0 00:04:26.155 SYMLINK libspdk_bdev_passthru.so 00:04:26.155 LIB libspdk_bdev_null.a 00:04:26.155 LIB libspdk_bdev_malloc.a 00:04:26.155 SO libspdk_bdev_null.so.6.0 00:04:26.155 SO libspdk_bdev_malloc.so.6.0 00:04:26.155 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:26.155 SYMLINK libspdk_bdev_null.so 00:04:26.155 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:26.155 CC module/bdev/nvme/vbdev_opal.o 00:04:26.155 SYMLINK libspdk_bdev_malloc.so 00:04:26.155 LIB libspdk_bdev_split.a 00:04:26.155 CC module/bdev/aio/bdev_aio.o 00:04:26.155 CC module/bdev/ftl/bdev_ftl.o 00:04:26.155 SO libspdk_bdev_split.so.6.0 00:04:26.413 CC 
module/bdev/iscsi/bdev_iscsi.o 00:04:26.413 SYMLINK libspdk_bdev_split.so 00:04:26.413 CC module/bdev/aio/bdev_aio_rpc.o 00:04:26.413 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:26.413 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:26.413 LIB libspdk_bdev_zone_block.a 00:04:26.673 SO libspdk_bdev_zone_block.so.6.0 00:04:26.673 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:26.673 CC module/bdev/raid/bdev_raid_rpc.o 00:04:26.673 LIB libspdk_bdev_aio.a 00:04:26.673 CC module/bdev/raid/bdev_raid_sb.o 00:04:26.673 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:26.673 SYMLINK libspdk_bdev_zone_block.so 00:04:26.673 CC module/bdev/raid/raid0.o 00:04:26.673 SO libspdk_bdev_aio.so.6.0 00:04:26.673 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:26.673 SYMLINK libspdk_bdev_aio.so 00:04:26.673 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:26.673 LIB libspdk_bdev_iscsi.a 00:04:26.673 SO libspdk_bdev_iscsi.so.6.0 00:04:26.673 LIB libspdk_bdev_ftl.a 00:04:26.932 SO libspdk_bdev_ftl.so.6.0 00:04:26.932 SYMLINK libspdk_bdev_iscsi.so 00:04:26.932 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:26.932 CC module/bdev/raid/raid1.o 00:04:26.932 CC module/bdev/raid/concat.o 00:04:26.932 SYMLINK libspdk_bdev_ftl.so 00:04:26.932 CC module/bdev/raid/raid5f.o 00:04:27.190 LIB libspdk_bdev_virtio.a 00:04:27.190 SO libspdk_bdev_virtio.so.6.0 00:04:27.448 SYMLINK libspdk_bdev_virtio.so 00:04:27.448 LIB libspdk_bdev_raid.a 00:04:27.448 SO libspdk_bdev_raid.so.6.0 00:04:27.708 SYMLINK libspdk_bdev_raid.so 00:04:28.643 LIB libspdk_bdev_nvme.a 00:04:28.643 SO libspdk_bdev_nvme.so.7.1 00:04:28.901 SYMLINK libspdk_bdev_nvme.so 00:04:29.469 CC module/event/subsystems/scheduler/scheduler.o 00:04:29.469 CC module/event/subsystems/iobuf/iobuf.o 00:04:29.469 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:29.469 CC module/event/subsystems/sock/sock.o 00:04:29.469 CC module/event/subsystems/vmd/vmd.o 00:04:29.469 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:29.469 CC module/event/subsystems/vhost_blk/vhost_blk.o 
00:04:29.469 CC module/event/subsystems/keyring/keyring.o 00:04:29.469 CC module/event/subsystems/fsdev/fsdev.o 00:04:29.469 LIB libspdk_event_keyring.a 00:04:29.469 LIB libspdk_event_scheduler.a 00:04:29.469 LIB libspdk_event_sock.a 00:04:29.469 LIB libspdk_event_vhost_blk.a 00:04:29.469 LIB libspdk_event_fsdev.a 00:04:29.469 SO libspdk_event_scheduler.so.4.0 00:04:29.469 SO libspdk_event_keyring.so.1.0 00:04:29.469 LIB libspdk_event_vmd.a 00:04:29.469 SO libspdk_event_vhost_blk.so.3.0 00:04:29.469 SO libspdk_event_sock.so.5.0 00:04:29.469 SO libspdk_event_fsdev.so.1.0 00:04:29.469 SO libspdk_event_vmd.so.6.0 00:04:29.469 SYMLINK libspdk_event_scheduler.so 00:04:29.469 SYMLINK libspdk_event_keyring.so 00:04:29.469 LIB libspdk_event_iobuf.a 00:04:29.469 SYMLINK libspdk_event_vhost_blk.so 00:04:29.469 SYMLINK libspdk_event_fsdev.so 00:04:29.469 SYMLINK libspdk_event_sock.so 00:04:29.727 SO libspdk_event_iobuf.so.3.0 00:04:29.727 SYMLINK libspdk_event_vmd.so 00:04:29.727 SYMLINK libspdk_event_iobuf.so 00:04:29.984 CC module/event/subsystems/accel/accel.o 00:04:30.243 LIB libspdk_event_accel.a 00:04:30.243 SO libspdk_event_accel.so.6.0 00:04:30.243 SYMLINK libspdk_event_accel.so 00:04:30.810 CC module/event/subsystems/bdev/bdev.o 00:04:30.810 LIB libspdk_event_bdev.a 00:04:31.069 SO libspdk_event_bdev.so.6.0 00:04:31.069 SYMLINK libspdk_event_bdev.so 00:04:31.328 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:31.328 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:31.328 CC module/event/subsystems/scsi/scsi.o 00:04:31.328 CC module/event/subsystems/nbd/nbd.o 00:04:31.328 CC module/event/subsystems/ublk/ublk.o 00:04:31.588 LIB libspdk_event_nbd.a 00:04:31.588 LIB libspdk_event_scsi.a 00:04:31.588 LIB libspdk_event_ublk.a 00:04:31.588 SO libspdk_event_nbd.so.6.0 00:04:31.588 SO libspdk_event_scsi.so.6.0 00:04:31.588 LIB libspdk_event_nvmf.a 00:04:31.588 SO libspdk_event_ublk.so.3.0 00:04:31.588 SYMLINK libspdk_event_nbd.so 00:04:31.588 SYMLINK libspdk_event_scsi.so 
00:04:31.588 SO libspdk_event_nvmf.so.6.0 00:04:31.588 SYMLINK libspdk_event_ublk.so 00:04:31.588 SYMLINK libspdk_event_nvmf.so 00:04:32.158 CC module/event/subsystems/iscsi/iscsi.o 00:04:32.158 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:32.158 LIB libspdk_event_iscsi.a 00:04:32.158 LIB libspdk_event_vhost_scsi.a 00:04:32.158 SO libspdk_event_iscsi.so.6.0 00:04:32.158 SO libspdk_event_vhost_scsi.so.3.0 00:04:32.158 SYMLINK libspdk_event_iscsi.so 00:04:32.158 SYMLINK libspdk_event_vhost_scsi.so 00:04:32.418 SO libspdk.so.6.0 00:04:32.418 SYMLINK libspdk.so 00:04:32.677 CC test/rpc_client/rpc_client_test.o 00:04:32.677 TEST_HEADER include/spdk/accel.h 00:04:32.677 CC app/trace_record/trace_record.o 00:04:32.677 TEST_HEADER include/spdk/accel_module.h 00:04:32.677 TEST_HEADER include/spdk/assert.h 00:04:32.953 CXX app/trace/trace.o 00:04:32.953 TEST_HEADER include/spdk/barrier.h 00:04:32.953 TEST_HEADER include/spdk/base64.h 00:04:32.953 TEST_HEADER include/spdk/bdev.h 00:04:32.953 TEST_HEADER include/spdk/bdev_module.h 00:04:32.953 TEST_HEADER include/spdk/bdev_zone.h 00:04:32.953 TEST_HEADER include/spdk/bit_array.h 00:04:32.953 TEST_HEADER include/spdk/bit_pool.h 00:04:32.953 TEST_HEADER include/spdk/blob_bdev.h 00:04:32.953 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:32.953 TEST_HEADER include/spdk/blobfs.h 00:04:32.953 TEST_HEADER include/spdk/blob.h 00:04:32.953 TEST_HEADER include/spdk/conf.h 00:04:32.953 TEST_HEADER include/spdk/config.h 00:04:32.953 TEST_HEADER include/spdk/cpuset.h 00:04:32.953 TEST_HEADER include/spdk/crc16.h 00:04:32.953 TEST_HEADER include/spdk/crc32.h 00:04:32.953 TEST_HEADER include/spdk/crc64.h 00:04:32.953 TEST_HEADER include/spdk/dif.h 00:04:32.953 TEST_HEADER include/spdk/dma.h 00:04:32.953 TEST_HEADER include/spdk/endian.h 00:04:32.953 TEST_HEADER include/spdk/env_dpdk.h 00:04:32.953 TEST_HEADER include/spdk/env.h 00:04:32.953 TEST_HEADER include/spdk/event.h 00:04:32.953 TEST_HEADER include/spdk/fd_group.h 
00:04:32.953 TEST_HEADER include/spdk/fd.h 00:04:32.953 TEST_HEADER include/spdk/file.h 00:04:32.953 TEST_HEADER include/spdk/fsdev.h 00:04:32.953 CC examples/util/zipf/zipf.o 00:04:32.953 TEST_HEADER include/spdk/fsdev_module.h 00:04:32.953 TEST_HEADER include/spdk/ftl.h 00:04:32.953 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:32.953 TEST_HEADER include/spdk/gpt_spec.h 00:04:32.953 CC test/thread/poller_perf/poller_perf.o 00:04:32.953 CC examples/ioat/perf/perf.o 00:04:32.953 TEST_HEADER include/spdk/hexlify.h 00:04:32.953 TEST_HEADER include/spdk/histogram_data.h 00:04:32.953 CC test/dma/test_dma/test_dma.o 00:04:32.953 TEST_HEADER include/spdk/idxd.h 00:04:32.953 TEST_HEADER include/spdk/idxd_spec.h 00:04:32.953 TEST_HEADER include/spdk/init.h 00:04:32.953 TEST_HEADER include/spdk/ioat.h 00:04:32.953 TEST_HEADER include/spdk/ioat_spec.h 00:04:32.953 CC test/app/bdev_svc/bdev_svc.o 00:04:32.953 TEST_HEADER include/spdk/iscsi_spec.h 00:04:32.953 TEST_HEADER include/spdk/json.h 00:04:32.953 TEST_HEADER include/spdk/jsonrpc.h 00:04:32.953 TEST_HEADER include/spdk/keyring.h 00:04:32.953 TEST_HEADER include/spdk/keyring_module.h 00:04:32.953 TEST_HEADER include/spdk/likely.h 00:04:32.953 TEST_HEADER include/spdk/log.h 00:04:32.953 TEST_HEADER include/spdk/lvol.h 00:04:32.953 TEST_HEADER include/spdk/md5.h 00:04:32.953 TEST_HEADER include/spdk/memory.h 00:04:32.953 TEST_HEADER include/spdk/mmio.h 00:04:32.953 TEST_HEADER include/spdk/nbd.h 00:04:32.953 TEST_HEADER include/spdk/net.h 00:04:32.953 TEST_HEADER include/spdk/notify.h 00:04:32.953 TEST_HEADER include/spdk/nvme.h 00:04:32.953 TEST_HEADER include/spdk/nvme_intel.h 00:04:32.953 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:32.953 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:32.953 TEST_HEADER include/spdk/nvme_spec.h 00:04:32.953 TEST_HEADER include/spdk/nvme_zns.h 00:04:32.953 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:32.953 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:32.953 TEST_HEADER 
include/spdk/nvmf.h 00:04:32.953 TEST_HEADER include/spdk/nvmf_spec.h 00:04:32.953 TEST_HEADER include/spdk/nvmf_transport.h 00:04:32.953 TEST_HEADER include/spdk/opal.h 00:04:32.953 LINK rpc_client_test 00:04:32.953 TEST_HEADER include/spdk/opal_spec.h 00:04:32.953 TEST_HEADER include/spdk/pci_ids.h 00:04:32.953 TEST_HEADER include/spdk/pipe.h 00:04:32.953 TEST_HEADER include/spdk/queue.h 00:04:32.953 TEST_HEADER include/spdk/reduce.h 00:04:32.953 TEST_HEADER include/spdk/rpc.h 00:04:32.953 TEST_HEADER include/spdk/scheduler.h 00:04:32.953 CC test/env/mem_callbacks/mem_callbacks.o 00:04:32.954 TEST_HEADER include/spdk/scsi.h 00:04:32.954 TEST_HEADER include/spdk/scsi_spec.h 00:04:32.954 TEST_HEADER include/spdk/sock.h 00:04:32.954 TEST_HEADER include/spdk/stdinc.h 00:04:32.954 TEST_HEADER include/spdk/string.h 00:04:32.954 TEST_HEADER include/spdk/thread.h 00:04:32.954 TEST_HEADER include/spdk/trace.h 00:04:32.954 TEST_HEADER include/spdk/trace_parser.h 00:04:32.954 TEST_HEADER include/spdk/tree.h 00:04:32.954 TEST_HEADER include/spdk/ublk.h 00:04:32.954 TEST_HEADER include/spdk/util.h 00:04:32.954 TEST_HEADER include/spdk/uuid.h 00:04:33.240 TEST_HEADER include/spdk/version.h 00:04:33.240 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:33.240 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:33.240 TEST_HEADER include/spdk/vhost.h 00:04:33.240 TEST_HEADER include/spdk/vmd.h 00:04:33.240 LINK zipf 00:04:33.240 TEST_HEADER include/spdk/xor.h 00:04:33.240 TEST_HEADER include/spdk/zipf.h 00:04:33.240 CXX test/cpp_headers/accel.o 00:04:33.240 LINK poller_perf 00:04:33.240 LINK spdk_trace_record 00:04:33.240 LINK bdev_svc 00:04:33.240 LINK ioat_perf 00:04:33.240 LINK spdk_trace 00:04:33.240 CXX test/cpp_headers/accel_module.o 00:04:33.240 CC app/nvmf_tgt/nvmf_main.o 00:04:33.240 CXX test/cpp_headers/assert.o 00:04:33.501 CXX test/cpp_headers/barrier.o 00:04:33.501 CC app/iscsi_tgt/iscsi_tgt.o 00:04:33.501 CC examples/ioat/verify/verify.o 00:04:33.501 CC 
test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:33.501 LINK test_dma 00:04:33.501 LINK nvmf_tgt 00:04:33.501 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:33.501 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:33.501 CXX test/cpp_headers/base64.o 00:04:33.501 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:33.501 LINK mem_callbacks 00:04:33.501 LINK iscsi_tgt 00:04:33.760 LINK verify 00:04:33.760 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:33.760 CXX test/cpp_headers/bdev.o 00:04:33.760 LINK interrupt_tgt 00:04:33.760 CC test/env/vtophys/vtophys.o 00:04:33.760 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:34.019 LINK nvme_fuzz 00:04:34.019 CC examples/thread/thread/thread_ex.o 00:04:34.019 CC examples/sock/hello_world/hello_sock.o 00:04:34.019 CXX test/cpp_headers/bdev_module.o 00:04:34.019 LINK vtophys 00:04:34.019 CC app/spdk_tgt/spdk_tgt.o 00:04:34.019 CC test/env/memory/memory_ut.o 00:04:34.019 LINK env_dpdk_post_init 00:04:34.019 CXX test/cpp_headers/bdev_zone.o 00:04:34.019 LINK vhost_fuzz 00:04:34.279 CC test/env/pci/pci_ut.o 00:04:34.279 LINK hello_sock 00:04:34.279 LINK spdk_tgt 00:04:34.279 LINK thread 00:04:34.279 CXX test/cpp_headers/bit_array.o 00:04:34.279 CC test/event/event_perf/event_perf.o 00:04:34.279 CC test/event/reactor/reactor.o 00:04:34.279 CC test/event/reactor_perf/reactor_perf.o 00:04:34.537 CC test/event/app_repeat/app_repeat.o 00:04:34.537 CXX test/cpp_headers/bit_pool.o 00:04:34.537 LINK reactor 00:04:34.537 LINK event_perf 00:04:34.537 CC app/spdk_lspci/spdk_lspci.o 00:04:34.537 LINK reactor_perf 00:04:34.537 CC examples/vmd/lsvmd/lsvmd.o 00:04:34.537 LINK app_repeat 00:04:34.537 LINK pci_ut 00:04:34.537 CXX test/cpp_headers/blob_bdev.o 00:04:34.537 LINK spdk_lspci 00:04:34.796 CC app/spdk_nvme_perf/perf.o 00:04:34.796 LINK lsvmd 00:04:34.796 CC test/event/scheduler/scheduler.o 00:04:34.796 CC test/app/histogram_perf/histogram_perf.o 00:04:34.796 CXX test/cpp_headers/blobfs_bdev.o 00:04:34.796 CC 
app/spdk_nvme_identify/identify.o 00:04:35.056 LINK histogram_perf 00:04:35.056 CC examples/idxd/perf/perf.o 00:04:35.056 CXX test/cpp_headers/blobfs.o 00:04:35.056 LINK scheduler 00:04:35.056 CC examples/vmd/led/led.o 00:04:35.056 CXX test/cpp_headers/blob.o 00:04:35.317 LINK led 00:04:35.317 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:35.317 CC test/app/jsoncat/jsoncat.o 00:04:35.317 LINK memory_ut 00:04:35.317 LINK idxd_perf 00:04:35.317 CXX test/cpp_headers/conf.o 00:04:35.317 LINK jsoncat 00:04:35.317 CXX test/cpp_headers/config.o 00:04:35.317 CC test/nvme/aer/aer.o 00:04:35.576 CC test/app/stub/stub.o 00:04:35.576 CXX test/cpp_headers/cpuset.o 00:04:35.576 LINK hello_fsdev 00:04:35.576 CXX test/cpp_headers/crc16.o 00:04:35.576 LINK iscsi_fuzz 00:04:35.576 CC examples/accel/perf/accel_perf.o 00:04:35.835 LINK stub 00:04:35.835 LINK aer 00:04:35.835 CC examples/blob/hello_world/hello_blob.o 00:04:35.835 CXX test/cpp_headers/crc32.o 00:04:35.835 LINK spdk_nvme_perf 00:04:35.835 CXX test/cpp_headers/crc64.o 00:04:35.835 CXX test/cpp_headers/dif.o 00:04:35.835 CC examples/blob/cli/blobcli.o 00:04:35.835 LINK spdk_nvme_identify 00:04:36.095 CC test/nvme/reset/reset.o 00:04:36.095 CXX test/cpp_headers/dma.o 00:04:36.095 LINK hello_blob 00:04:36.095 CC app/spdk_nvme_discover/discovery_aer.o 00:04:36.095 CC app/spdk_top/spdk_top.o 00:04:36.095 CC examples/nvme/hello_world/hello_world.o 00:04:36.095 CXX test/cpp_headers/endian.o 00:04:36.095 CC test/accel/dif/dif.o 00:04:36.355 LINK spdk_nvme_discover 00:04:36.355 LINK reset 00:04:36.355 LINK accel_perf 00:04:36.355 CXX test/cpp_headers/env_dpdk.o 00:04:36.355 CC app/vhost/vhost.o 00:04:36.355 LINK hello_world 00:04:36.355 CC examples/nvme/reconnect/reconnect.o 00:04:36.355 LINK blobcli 00:04:36.355 CXX test/cpp_headers/env.o 00:04:36.355 CXX test/cpp_headers/event.o 00:04:36.355 CXX test/cpp_headers/fd_group.o 00:04:36.614 LINK vhost 00:04:36.614 CXX test/cpp_headers/fd.o 00:04:36.614 CC test/nvme/sgl/sgl.o 
00:04:36.614 CXX test/cpp_headers/file.o 00:04:36.614 CC app/spdk_dd/spdk_dd.o 00:04:36.614 CC test/nvme/e2edp/nvme_dp.o 00:04:36.614 LINK reconnect 00:04:36.873 CC test/nvme/overhead/overhead.o 00:04:36.873 CC app/fio/nvme/fio_plugin.o 00:04:36.873 CXX test/cpp_headers/fsdev.o 00:04:36.873 CC app/fio/bdev/fio_plugin.o 00:04:36.873 LINK sgl 00:04:36.873 LINK dif 00:04:36.873 CXX test/cpp_headers/fsdev_module.o 00:04:37.132 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:37.132 LINK nvme_dp 00:04:37.132 LINK overhead 00:04:37.132 CXX test/cpp_headers/ftl.o 00:04:37.132 LINK spdk_top 00:04:37.132 LINK spdk_dd 00:04:37.132 CXX test/cpp_headers/fuse_dispatcher.o 00:04:37.391 CXX test/cpp_headers/gpt_spec.o 00:04:37.391 CC test/nvme/err_injection/err_injection.o 00:04:37.391 CC test/blobfs/mkfs/mkfs.o 00:04:37.391 CC examples/nvme/arbitration/arbitration.o 00:04:37.391 LINK spdk_nvme 00:04:37.391 LINK spdk_bdev 00:04:37.391 CC test/lvol/esnap/esnap.o 00:04:37.391 CXX test/cpp_headers/hexlify.o 00:04:37.391 CC examples/bdev/hello_world/hello_bdev.o 00:04:37.391 LINK err_injection 00:04:37.391 CC examples/nvme/hotplug/hotplug.o 00:04:37.651 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:37.651 LINK nvme_manage 00:04:37.651 LINK mkfs 00:04:37.651 CXX test/cpp_headers/histogram_data.o 00:04:37.651 CC examples/bdev/bdevperf/bdevperf.o 00:04:37.651 LINK hello_bdev 00:04:37.651 LINK cmb_copy 00:04:37.651 CC test/nvme/startup/startup.o 00:04:37.651 CXX test/cpp_headers/idxd.o 00:04:37.651 LINK arbitration 00:04:37.651 LINK hotplug 00:04:37.911 CXX test/cpp_headers/idxd_spec.o 00:04:37.911 CXX test/cpp_headers/init.o 00:04:37.911 CC test/bdev/bdevio/bdevio.o 00:04:37.911 LINK startup 00:04:37.911 CC test/nvme/reserve/reserve.o 00:04:37.911 CC examples/nvme/abort/abort.o 00:04:37.911 CC test/nvme/simple_copy/simple_copy.o 00:04:37.911 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:38.171 CC test/nvme/connect_stress/connect_stress.o 00:04:38.171 CXX test/cpp_headers/ioat.o 
00:04:38.171 LINK pmr_persistence 00:04:38.171 CC test/nvme/boot_partition/boot_partition.o 00:04:38.171 LINK reserve 00:04:38.171 LINK connect_stress 00:04:38.171 CXX test/cpp_headers/ioat_spec.o 00:04:38.171 LINK simple_copy 00:04:38.431 LINK bdevio 00:04:38.431 LINK boot_partition 00:04:38.431 CXX test/cpp_headers/iscsi_spec.o 00:04:38.431 LINK abort 00:04:38.431 CC test/nvme/compliance/nvme_compliance.o 00:04:38.431 CC test/nvme/fused_ordering/fused_ordering.o 00:04:38.431 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:38.431 LINK bdevperf 00:04:38.431 CC test/nvme/fdp/fdp.o 00:04:38.691 CXX test/cpp_headers/json.o 00:04:38.691 CXX test/cpp_headers/jsonrpc.o 00:04:38.691 CXX test/cpp_headers/keyring.o 00:04:38.691 CC test/nvme/cuse/cuse.o 00:04:38.691 LINK doorbell_aers 00:04:38.691 LINK fused_ordering 00:04:38.691 CXX test/cpp_headers/keyring_module.o 00:04:38.691 CXX test/cpp_headers/likely.o 00:04:38.691 CXX test/cpp_headers/log.o 00:04:38.950 LINK nvme_compliance 00:04:38.950 CXX test/cpp_headers/lvol.o 00:04:38.950 CXX test/cpp_headers/md5.o 00:04:38.950 CXX test/cpp_headers/memory.o 00:04:38.950 LINK fdp 00:04:38.950 CXX test/cpp_headers/mmio.o 00:04:38.950 CC examples/nvmf/nvmf/nvmf.o 00:04:38.950 CXX test/cpp_headers/nbd.o 00:04:38.950 CXX test/cpp_headers/net.o 00:04:38.950 CXX test/cpp_headers/notify.o 00:04:38.950 CXX test/cpp_headers/nvme.o 00:04:38.950 CXX test/cpp_headers/nvme_intel.o 00:04:39.211 CXX test/cpp_headers/nvme_ocssd.o 00:04:39.211 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:39.211 CXX test/cpp_headers/nvme_spec.o 00:04:39.211 CXX test/cpp_headers/nvme_zns.o 00:04:39.211 CXX test/cpp_headers/nvmf_cmd.o 00:04:39.211 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:39.211 CXX test/cpp_headers/nvmf.o 00:04:39.211 CXX test/cpp_headers/nvmf_spec.o 00:04:39.211 LINK nvmf 00:04:39.211 CXX test/cpp_headers/nvmf_transport.o 00:04:39.211 CXX test/cpp_headers/opal.o 00:04:39.470 CXX test/cpp_headers/opal_spec.o 00:04:39.470 CXX 
test/cpp_headers/pci_ids.o 00:04:39.470 CXX test/cpp_headers/pipe.o 00:04:39.470 CXX test/cpp_headers/queue.o 00:04:39.470 CXX test/cpp_headers/reduce.o 00:04:39.470 CXX test/cpp_headers/rpc.o 00:04:39.470 CXX test/cpp_headers/scheduler.o 00:04:39.470 CXX test/cpp_headers/scsi.o 00:04:39.470 CXX test/cpp_headers/scsi_spec.o 00:04:39.470 CXX test/cpp_headers/sock.o 00:04:39.470 CXX test/cpp_headers/stdinc.o 00:04:39.470 CXX test/cpp_headers/string.o 00:04:39.732 CXX test/cpp_headers/thread.o 00:04:39.732 CXX test/cpp_headers/trace.o 00:04:39.732 CXX test/cpp_headers/trace_parser.o 00:04:39.732 CXX test/cpp_headers/tree.o 00:04:39.732 CXX test/cpp_headers/ublk.o 00:04:39.732 CXX test/cpp_headers/util.o 00:04:39.732 CXX test/cpp_headers/uuid.o 00:04:39.732 CXX test/cpp_headers/version.o 00:04:39.732 CXX test/cpp_headers/vfio_user_pci.o 00:04:39.732 CXX test/cpp_headers/vfio_user_spec.o 00:04:39.732 CXX test/cpp_headers/vhost.o 00:04:39.732 CXX test/cpp_headers/vmd.o 00:04:39.732 CXX test/cpp_headers/xor.o 00:04:39.732 CXX test/cpp_headers/zipf.o 00:04:39.991 LINK cuse 00:04:44.191 LINK esnap 00:04:44.191 00:04:44.191 real 1m26.904s 00:04:44.191 user 7m42.146s 00:04:44.191 sys 1m40.180s 00:04:44.191 11:14:26 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:44.191 11:14:26 make -- common/autotest_common.sh@10 -- $ set +x 00:04:44.191 ************************************ 00:04:44.191 END TEST make 00:04:44.191 ************************************ 00:04:44.191 11:14:26 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:44.191 11:14:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:44.191 11:14:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:44.191 11:14:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:44.191 11:14:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:44.191 11:14:26 -- pm/common@44 -- $ pid=5476 00:04:44.191 11:14:26 -- pm/common@50 -- 
$ kill -TERM 5476 00:04:44.191 11:14:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:44.191 11:14:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:44.191 11:14:26 -- pm/common@44 -- $ pid=5477 00:04:44.191 11:14:26 -- pm/common@50 -- $ kill -TERM 5477 00:04:44.191 11:14:26 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:44.191 11:14:26 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:44.191 11:14:27 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.191 11:14:27 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.191 11:14:27 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.191 11:14:27 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.191 11:14:27 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.191 11:14:27 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.191 11:14:27 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.191 11:14:27 -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.191 11:14:27 -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.191 11:14:27 -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.191 11:14:27 -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.191 11:14:27 -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.191 11:14:27 -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.191 11:14:27 -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.191 11:14:27 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.191 11:14:27 -- scripts/common.sh@344 -- # case "$op" in 00:04:44.191 11:14:27 -- scripts/common.sh@345 -- # : 1 00:04:44.191 11:14:27 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.191 11:14:27 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.191 11:14:27 -- scripts/common.sh@365 -- # decimal 1 00:04:44.191 11:14:27 -- scripts/common.sh@353 -- # local d=1 00:04:44.191 11:14:27 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.191 11:14:27 -- scripts/common.sh@355 -- # echo 1 00:04:44.191 11:14:27 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.191 11:14:27 -- scripts/common.sh@366 -- # decimal 2 00:04:44.191 11:14:27 -- scripts/common.sh@353 -- # local d=2 00:04:44.191 11:14:27 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.191 11:14:27 -- scripts/common.sh@355 -- # echo 2 00:04:44.191 11:14:27 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.191 11:14:27 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.191 11:14:27 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.191 11:14:27 -- scripts/common.sh@368 -- # return 0 00:04:44.191 11:14:27 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.191 11:14:27 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.191 --rc genhtml_branch_coverage=1 00:04:44.191 --rc genhtml_function_coverage=1 00:04:44.191 --rc genhtml_legend=1 00:04:44.191 --rc geninfo_all_blocks=1 00:04:44.191 --rc geninfo_unexecuted_blocks=1 00:04:44.191 00:04:44.191 ' 00:04:44.191 11:14:27 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.191 --rc genhtml_branch_coverage=1 00:04:44.191 --rc genhtml_function_coverage=1 00:04:44.191 --rc genhtml_legend=1 00:04:44.191 --rc geninfo_all_blocks=1 00:04:44.191 --rc geninfo_unexecuted_blocks=1 00:04:44.191 00:04:44.191 ' 00:04:44.191 11:14:27 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.191 --rc genhtml_branch_coverage=1 00:04:44.191 --rc 
genhtml_function_coverage=1 00:04:44.191 --rc genhtml_legend=1 00:04:44.191 --rc geninfo_all_blocks=1 00:04:44.191 --rc geninfo_unexecuted_blocks=1 00:04:44.191 00:04:44.191 ' 00:04:44.191 11:14:27 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.191 --rc genhtml_branch_coverage=1 00:04:44.191 --rc genhtml_function_coverage=1 00:04:44.191 --rc genhtml_legend=1 00:04:44.191 --rc geninfo_all_blocks=1 00:04:44.191 --rc geninfo_unexecuted_blocks=1 00:04:44.191 00:04:44.191 ' 00:04:44.191 11:14:27 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:44.191 11:14:27 -- nvmf/common.sh@7 -- # uname -s 00:04:44.191 11:14:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.191 11:14:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.191 11:14:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.191 11:14:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.191 11:14:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.191 11:14:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.191 11:14:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.191 11:14:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.191 11:14:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.191 11:14:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.191 11:14:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1b4f2fe2-87f8-4e8d-9e38-efdfeac62c69 00:04:44.191 11:14:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=1b4f2fe2-87f8-4e8d-9e38-efdfeac62c69 00:04:44.191 11:14:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.191 11:14:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.191 11:14:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:44.191 11:14:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:04:44.191 11:14:27 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:44.191 11:14:27 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:44.191 11:14:27 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.191 11:14:27 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.191 11:14:27 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.191 11:14:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.191 11:14:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.191 11:14:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.191 11:14:27 -- paths/export.sh@5 -- # export PATH 00:04:44.192 11:14:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.192 11:14:27 -- nvmf/common.sh@51 -- # : 0 00:04:44.192 11:14:27 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:44.192 11:14:27 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:44.192 11:14:27 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:04:44.192 11:14:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.192 11:14:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.192 11:14:27 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:44.192 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:44.192 11:14:27 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:44.192 11:14:27 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:44.192 11:14:27 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:44.192 11:14:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:44.192 11:14:27 -- spdk/autotest.sh@32 -- # uname -s 00:04:44.192 11:14:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:44.192 11:14:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:44.192 11:14:27 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:44.192 11:14:27 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:44.192 11:14:27 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:44.192 11:14:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:44.192 11:14:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:44.192 11:14:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:44.192 11:14:27 -- spdk/autotest.sh@48 -- # udevadm_pid=54460 00:04:44.192 11:14:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:44.192 11:14:27 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:44.192 11:14:27 -- pm/common@17 -- # local monitor 00:04:44.192 11:14:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:44.192 11:14:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:44.192 11:14:27 -- pm/common@21 -- # date +%s 00:04:44.192 11:14:27 -- pm/common@25 -- # sleep 1 00:04:44.192 11:14:27 -- 
pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732101267 00:04:44.192 11:14:27 -- pm/common@21 -- # date +%s 00:04:44.192 11:14:27 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732101267 00:04:44.192 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732101267_collect-cpu-load.pm.log 00:04:44.192 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732101267_collect-vmstat.pm.log 00:04:45.132 11:14:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:45.132 11:14:28 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:45.394 11:14:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.394 11:14:28 -- common/autotest_common.sh@10 -- # set +x 00:04:45.394 11:14:28 -- spdk/autotest.sh@59 -- # create_test_list 00:04:45.394 11:14:28 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:45.394 11:14:28 -- common/autotest_common.sh@10 -- # set +x 00:04:45.394 11:14:28 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:45.394 11:14:28 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:45.394 11:14:28 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:45.394 11:14:28 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:45.394 11:14:28 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:45.394 11:14:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:45.394 11:14:28 -- common/autotest_common.sh@1457 -- # uname 00:04:45.394 11:14:28 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:45.394 11:14:28 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:45.394 11:14:28 -- common/autotest_common.sh@1477 -- 
# uname 00:04:45.394 11:14:28 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:45.394 11:14:28 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:45.394 11:14:28 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:45.394 lcov: LCOV version 1.15 00:04:45.394 11:14:28 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:00.291 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:00.291 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:15.188 11:14:57 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:15.188 11:14:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.188 11:14:57 -- common/autotest_common.sh@10 -- # set +x 00:05:15.188 11:14:57 -- spdk/autotest.sh@78 -- # rm -f 00:05:15.188 11:14:57 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:15.758 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.758 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:15.758 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:15.758 11:14:58 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:15.758 11:14:58 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:15.758 11:14:58 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:15.758 11:14:58 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:15.758 
11:14:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:15.758 11:14:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:15.758 11:14:58 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:15.758 11:14:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:15.758 11:14:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:15.758 11:14:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:15.758 11:14:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:05:15.758 11:14:58 -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:05:15.758 11:14:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:15.758 11:14:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:15.758 11:14:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:15.758 11:14:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:05:15.758 11:14:58 -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:05:15.758 11:14:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:15.758 11:14:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:15.758 11:14:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:15.758 11:14:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:15.758 11:14:58 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:15.758 11:14:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:15.758 11:14:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:15.758 11:14:58 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:15.758 11:14:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:15.758 11:14:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:15.758 11:14:58 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:05:15.758 11:14:58 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:15.758 11:14:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:15.758 No valid GPT data, bailing 00:05:15.758 11:14:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:15.758 11:14:58 -- scripts/common.sh@394 -- # pt= 00:05:15.758 11:14:58 -- scripts/common.sh@395 -- # return 1 00:05:15.758 11:14:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:15.758 1+0 records in 00:05:15.758 1+0 records out 00:05:15.758 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00646848 s, 162 MB/s 00:05:15.758 11:14:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:15.758 11:14:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:15.758 11:14:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:05:15.758 11:14:58 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:05:15.758 11:14:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:05:16.018 No valid GPT data, bailing 00:05:16.018 11:14:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:16.018 11:14:58 -- scripts/common.sh@394 -- # pt= 00:05:16.018 11:14:58 -- scripts/common.sh@395 -- # return 1 00:05:16.018 11:14:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:05:16.018 1+0 records in 00:05:16.018 1+0 records out 00:05:16.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00675437 s, 155 MB/s 00:05:16.018 11:14:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:16.018 11:14:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:16.018 11:14:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:05:16.018 11:14:58 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:05:16.018 11:14:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 
00:05:16.018 No valid GPT data, bailing 00:05:16.018 11:14:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:16.018 11:14:59 -- scripts/common.sh@394 -- # pt= 00:05:16.018 11:14:59 -- scripts/common.sh@395 -- # return 1 00:05:16.018 11:14:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:05:16.018 1+0 records in 00:05:16.018 1+0 records out 00:05:16.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0054027 s, 194 MB/s 00:05:16.018 11:14:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:16.018 11:14:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:16.018 11:14:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:16.018 11:14:59 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:16.018 11:14:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:16.018 No valid GPT data, bailing 00:05:16.018 11:14:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:16.018 11:14:59 -- scripts/common.sh@394 -- # pt= 00:05:16.018 11:14:59 -- scripts/common.sh@395 -- # return 1 00:05:16.018 11:14:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:16.018 1+0 records in 00:05:16.018 1+0 records out 00:05:16.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00664586 s, 158 MB/s 00:05:16.018 11:14:59 -- spdk/autotest.sh@105 -- # sync 00:05:16.276 11:14:59 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:16.276 11:14:59 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:16.276 11:14:59 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:19.601 11:15:02 -- spdk/autotest.sh@111 -- # uname -s 00:05:19.601 11:15:02 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:19.601 11:15:02 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:19.601 11:15:02 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:05:19.861 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.861 Hugepages 00:05:19.861 node hugesize free / total 00:05:19.861 node0 1048576kB 0 / 0 00:05:19.861 node0 2048kB 0 / 0 00:05:19.861 00:05:19.861 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:20.120 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:20.120 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:20.379 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:20.379 11:15:03 -- spdk/autotest.sh@117 -- # uname -s 00:05:20.379 11:15:03 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:20.379 11:15:03 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:20.379 11:15:03 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:21.317 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.317 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.317 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.317 11:15:04 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:22.256 11:15:05 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:22.256 11:15:05 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:22.256 11:15:05 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:22.256 11:15:05 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:22.256 11:15:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:22.256 11:15:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:22.256 11:15:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:22.256 11:15:05 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:22.256 11:15:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:22.515 11:15:05 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:22.516 11:15:05 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:22.516 11:15:05 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:23.086 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.086 Waiting for block devices as requested 00:05:23.086 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:23.086 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:23.346 11:15:06 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:23.346 11:15:06 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:23.346 11:15:06 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:23.346 11:15:06 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:23.346 11:15:06 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:23.346 11:15:06 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:23.346 11:15:06 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:23.346 11:15:06 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:23.346 11:15:06 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:23.346 11:15:06 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:23.346 11:15:06 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:23.346 11:15:06 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:23.346 11:15:06 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:23.346 11:15:06 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:23.346 11:15:06 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:23.346 11:15:06 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:05:23.346 11:15:06 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:23.346 11:15:06 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:23.346 11:15:06 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:23.346 11:15:06 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:23.346 11:15:06 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:23.346 11:15:06 -- common/autotest_common.sh@1543 -- # continue 00:05:23.346 11:15:06 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:23.346 11:15:06 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:23.346 11:15:06 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:23.346 11:15:06 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:23.346 11:15:06 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:23.346 11:15:06 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:23.346 11:15:06 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:23.346 11:15:06 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:23.346 11:15:06 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:23.346 11:15:06 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:23.346 11:15:06 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:23.346 11:15:06 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:23.346 11:15:06 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:23.346 11:15:06 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:23.346 11:15:06 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:23.346 11:15:06 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:23.346 11:15:06 -- common/autotest_common.sh@1540 -- # grep unvmcap 
00:05:23.346 11:15:06 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:23.346 11:15:06 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:23.346 11:15:06 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:23.346 11:15:06 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:23.346 11:15:06 -- common/autotest_common.sh@1543 -- # continue 00:05:23.346 11:15:06 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:23.346 11:15:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.346 11:15:06 -- common/autotest_common.sh@10 -- # set +x 00:05:23.346 11:15:06 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:23.346 11:15:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.346 11:15:06 -- common/autotest_common.sh@10 -- # set +x 00:05:23.346 11:15:06 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.285 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.285 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.544 11:15:07 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:24.544 11:15:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.544 11:15:07 -- common/autotest_common.sh@10 -- # set +x 00:05:24.544 11:15:07 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:24.544 11:15:07 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:24.544 11:15:07 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:24.544 11:15:07 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:24.544 11:15:07 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:24.544 11:15:07 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:24.544 11:15:07 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:24.544 11:15:07 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:24.544 
11:15:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:24.544 11:15:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:24.544 11:15:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:24.544 11:15:07 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:24.544 11:15:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:24.544 11:15:07 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:24.544 11:15:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:24.544 11:15:07 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:24.544 11:15:07 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:24.544 11:15:07 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:24.544 11:15:07 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:24.544 11:15:07 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:24.544 11:15:07 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:24.544 11:15:07 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:24.544 11:15:07 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:24.544 11:15:07 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:24.544 11:15:07 -- common/autotest_common.sh@1572 -- # return 0 00:05:24.544 11:15:07 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:24.544 11:15:07 -- common/autotest_common.sh@1580 -- # return 0 00:05:24.544 11:15:07 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:24.544 11:15:07 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:24.544 11:15:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:24.544 11:15:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:24.544 11:15:07 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:24.544 11:15:07 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.544 11:15:07 -- common/autotest_common.sh@10 -- # set +x 00:05:24.544 11:15:07 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:24.544 11:15:07 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:24.544 11:15:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.544 11:15:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.544 11:15:07 -- common/autotest_common.sh@10 -- # set +x 00:05:24.544 ************************************ 00:05:24.544 START TEST env 00:05:24.544 ************************************ 00:05:24.544 11:15:07 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:24.803 * Looking for test storage... 00:05:24.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:24.803 11:15:07 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:24.803 11:15:07 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:24.803 11:15:07 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.803 11:15:07 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:24.803 11:15:07 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.803 11:15:07 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.803 11:15:07 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.803 11:15:07 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.803 11:15:07 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.803 11:15:07 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.803 11:15:07 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.803 11:15:07 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.803 11:15:07 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.803 11:15:07 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.803 11:15:07 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.803 11:15:07 env -- 
scripts/common.sh@344 -- # case "$op" in 00:05:24.803 11:15:07 env -- scripts/common.sh@345 -- # : 1 00:05:24.803 11:15:07 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.803 11:15:07 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.803 11:15:07 env -- scripts/common.sh@365 -- # decimal 1 00:05:24.803 11:15:07 env -- scripts/common.sh@353 -- # local d=1 00:05:24.803 11:15:07 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.803 11:15:07 env -- scripts/common.sh@355 -- # echo 1 00:05:24.803 11:15:07 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.803 11:15:07 env -- scripts/common.sh@366 -- # decimal 2 00:05:24.803 11:15:07 env -- scripts/common.sh@353 -- # local d=2 00:05:24.803 11:15:07 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.803 11:15:07 env -- scripts/common.sh@355 -- # echo 2 00:05:24.803 11:15:07 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.803 11:15:07 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.803 11:15:07 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.803 11:15:07 env -- scripts/common.sh@368 -- # return 0 00:05:24.803 11:15:07 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.803 11:15:07 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:24.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.803 --rc genhtml_branch_coverage=1 00:05:24.803 --rc genhtml_function_coverage=1 00:05:24.803 --rc genhtml_legend=1 00:05:24.803 --rc geninfo_all_blocks=1 00:05:24.803 --rc geninfo_unexecuted_blocks=1 00:05:24.803 00:05:24.803 ' 00:05:24.803 11:15:07 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:24.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.803 --rc genhtml_branch_coverage=1 00:05:24.803 --rc genhtml_function_coverage=1 00:05:24.803 --rc genhtml_legend=1 00:05:24.803 --rc 
geninfo_all_blocks=1 00:05:24.803 --rc geninfo_unexecuted_blocks=1 00:05:24.803 00:05:24.803 ' 00:05:24.803 11:15:07 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:24.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.803 --rc genhtml_branch_coverage=1 00:05:24.803 --rc genhtml_function_coverage=1 00:05:24.803 --rc genhtml_legend=1 00:05:24.803 --rc geninfo_all_blocks=1 00:05:24.803 --rc geninfo_unexecuted_blocks=1 00:05:24.803 00:05:24.803 ' 00:05:24.803 11:15:07 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:24.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.803 --rc genhtml_branch_coverage=1 00:05:24.803 --rc genhtml_function_coverage=1 00:05:24.803 --rc genhtml_legend=1 00:05:24.803 --rc geninfo_all_blocks=1 00:05:24.803 --rc geninfo_unexecuted_blocks=1 00:05:24.803 00:05:24.803 ' 00:05:24.803 11:15:07 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:24.803 11:15:07 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.803 11:15:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.803 11:15:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.803 ************************************ 00:05:24.803 START TEST env_memory 00:05:24.803 ************************************ 00:05:24.803 11:15:07 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:24.803 00:05:24.803 00:05:24.803 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.803 http://cunit.sourceforge.net/ 00:05:24.803 00:05:24.803 00:05:24.803 Suite: memory 00:05:25.063 Test: alloc and free memory map ...[2024-11-20 11:15:07.931831] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:25.063 passed 00:05:25.063 Test: mem map translation ...[2024-11-20 11:15:07.972424] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:25.063 [2024-11-20 11:15:07.972479] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:25.063 [2024-11-20 11:15:07.972537] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:25.063 [2024-11-20 11:15:07.972556] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:25.063 passed 00:05:25.063 Test: mem map registration ...[2024-11-20 11:15:08.034798] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:25.063 [2024-11-20 11:15:08.034841] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:25.063 passed 00:05:25.063 Test: mem map adjacent registrations ...passed 00:05:25.063 00:05:25.063 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.063 suites 1 1 n/a 0 0 00:05:25.063 tests 4 4 4 0 0 00:05:25.063 asserts 152 152 152 0 n/a 00:05:25.063 00:05:25.063 Elapsed time = 0.228 seconds 00:05:25.063 00:05:25.063 real 0m0.285s 00:05:25.063 user 0m0.249s 00:05:25.063 sys 0m0.029s 00:05:25.063 11:15:08 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.063 11:15:08 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:25.063 ************************************ 00:05:25.063 END TEST env_memory 00:05:25.063 ************************************ 00:05:25.323 11:15:08 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:25.323 
11:15:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.323 11:15:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.323 11:15:08 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.323 ************************************ 00:05:25.323 START TEST env_vtophys 00:05:25.323 ************************************ 00:05:25.323 11:15:08 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:25.323 EAL: lib.eal log level changed from notice to debug 00:05:25.323 EAL: Detected lcore 0 as core 0 on socket 0 00:05:25.323 EAL: Detected lcore 1 as core 0 on socket 0 00:05:25.323 EAL: Detected lcore 2 as core 0 on socket 0 00:05:25.323 EAL: Detected lcore 3 as core 0 on socket 0 00:05:25.323 EAL: Detected lcore 4 as core 0 on socket 0 00:05:25.323 EAL: Detected lcore 5 as core 0 on socket 0 00:05:25.323 EAL: Detected lcore 6 as core 0 on socket 0 00:05:25.323 EAL: Detected lcore 7 as core 0 on socket 0 00:05:25.323 EAL: Detected lcore 8 as core 0 on socket 0 00:05:25.323 EAL: Detected lcore 9 as core 0 on socket 0 00:05:25.323 EAL: Maximum logical cores by configuration: 128 00:05:25.323 EAL: Detected CPU lcores: 10 00:05:25.323 EAL: Detected NUMA nodes: 1 00:05:25.323 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:25.323 EAL: Detected shared linkage of DPDK 00:05:25.323 EAL: No shared files mode enabled, IPC will be disabled 00:05:25.323 EAL: Selected IOVA mode 'PA' 00:05:25.323 EAL: Probing VFIO support... 00:05:25.323 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:25.323 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:25.323 EAL: Ask a virtual area of 0x2e000 bytes 00:05:25.323 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:25.323 EAL: Setting up physically contiguous memory... 
00:05:25.323 EAL: Setting maximum number of open files to 524288 00:05:25.323 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:25.323 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:25.323 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.323 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:25.323 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.323 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.323 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:25.323 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:25.323 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.323 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:25.323 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.323 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.323 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:25.323 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:25.323 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.323 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:25.323 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.323 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.323 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:25.323 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:25.323 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.323 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:25.323 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.323 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.323 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:25.323 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:25.323 EAL: Hugepages will be freed exactly as allocated. 
00:05:25.323 EAL: No shared files mode enabled, IPC is disabled 00:05:25.323 EAL: No shared files mode enabled, IPC is disabled 00:05:25.323 EAL: TSC frequency is ~2290000 KHz 00:05:25.323 EAL: Main lcore 0 is ready (tid=7fd73dc11a40;cpuset=[0]) 00:05:25.323 EAL: Trying to obtain current memory policy. 00:05:25.323 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.323 EAL: Restoring previous memory policy: 0 00:05:25.323 EAL: request: mp_malloc_sync 00:05:25.323 EAL: No shared files mode enabled, IPC is disabled 00:05:25.323 EAL: Heap on socket 0 was expanded by 2MB 00:05:25.323 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:25.323 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:25.323 EAL: Mem event callback 'spdk:(nil)' registered 00:05:25.323 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:25.323 00:05:25.323 00:05:25.323 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.323 http://cunit.sourceforge.net/ 00:05:25.323 00:05:25.323 00:05:25.323 Suite: components_suite 00:05:25.893 Test: vtophys_malloc_test ...passed 00:05:25.893 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:25.893 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.893 EAL: Restoring previous memory policy: 4 00:05:25.893 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.893 EAL: request: mp_malloc_sync 00:05:25.893 EAL: No shared files mode enabled, IPC is disabled 00:05:25.893 EAL: Heap on socket 0 was expanded by 4MB 00:05:25.893 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.893 EAL: request: mp_malloc_sync 00:05:25.893 EAL: No shared files mode enabled, IPC is disabled 00:05:25.893 EAL: Heap on socket 0 was shrunk by 4MB 00:05:25.893 EAL: Trying to obtain current memory policy. 
00:05:25.893 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.893 EAL: Restoring previous memory policy: 4 00:05:25.893 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.893 EAL: request: mp_malloc_sync 00:05:25.893 EAL: No shared files mode enabled, IPC is disabled 00:05:25.893 EAL: Heap on socket 0 was expanded by 6MB 00:05:25.893 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.893 EAL: request: mp_malloc_sync 00:05:25.893 EAL: No shared files mode enabled, IPC is disabled 00:05:25.893 EAL: Heap on socket 0 was shrunk by 6MB 00:05:25.893 EAL: Trying to obtain current memory policy. 00:05:25.893 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.893 EAL: Restoring previous memory policy: 4 00:05:25.893 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.893 EAL: request: mp_malloc_sync 00:05:25.893 EAL: No shared files mode enabled, IPC is disabled 00:05:25.893 EAL: Heap on socket 0 was expanded by 10MB 00:05:25.893 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.893 EAL: request: mp_malloc_sync 00:05:25.893 EAL: No shared files mode enabled, IPC is disabled 00:05:25.893 EAL: Heap on socket 0 was shrunk by 10MB 00:05:25.893 EAL: Trying to obtain current memory policy. 00:05:25.893 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.893 EAL: Restoring previous memory policy: 4 00:05:25.893 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.893 EAL: request: mp_malloc_sync 00:05:25.893 EAL: No shared files mode enabled, IPC is disabled 00:05:25.893 EAL: Heap on socket 0 was expanded by 18MB 00:05:25.893 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.893 EAL: request: mp_malloc_sync 00:05:25.893 EAL: No shared files mode enabled, IPC is disabled 00:05:25.893 EAL: Heap on socket 0 was shrunk by 18MB 00:05:26.153 EAL: Trying to obtain current memory policy. 
00:05:26.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.153 EAL: Restoring previous memory policy: 4 00:05:26.153 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.153 EAL: request: mp_malloc_sync 00:05:26.153 EAL: No shared files mode enabled, IPC is disabled 00:05:26.153 EAL: Heap on socket 0 was expanded by 34MB 00:05:26.153 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.153 EAL: request: mp_malloc_sync 00:05:26.153 EAL: No shared files mode enabled, IPC is disabled 00:05:26.153 EAL: Heap on socket 0 was shrunk by 34MB 00:05:26.153 EAL: Trying to obtain current memory policy. 00:05:26.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.153 EAL: Restoring previous memory policy: 4 00:05:26.153 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.153 EAL: request: mp_malloc_sync 00:05:26.153 EAL: No shared files mode enabled, IPC is disabled 00:05:26.153 EAL: Heap on socket 0 was expanded by 66MB 00:05:26.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.412 EAL: request: mp_malloc_sync 00:05:26.412 EAL: No shared files mode enabled, IPC is disabled 00:05:26.412 EAL: Heap on socket 0 was shrunk by 66MB 00:05:26.412 EAL: Trying to obtain current memory policy. 00:05:26.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.412 EAL: Restoring previous memory policy: 4 00:05:26.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.412 EAL: request: mp_malloc_sync 00:05:26.412 EAL: No shared files mode enabled, IPC is disabled 00:05:26.412 EAL: Heap on socket 0 was expanded by 130MB 00:05:26.671 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.671 EAL: request: mp_malloc_sync 00:05:26.671 EAL: No shared files mode enabled, IPC is disabled 00:05:26.671 EAL: Heap on socket 0 was shrunk by 130MB 00:05:26.931 EAL: Trying to obtain current memory policy. 
00:05:26.931 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.190 EAL: Restoring previous memory policy: 4 00:05:27.190 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.190 EAL: request: mp_malloc_sync 00:05:27.190 EAL: No shared files mode enabled, IPC is disabled 00:05:27.190 EAL: Heap on socket 0 was expanded by 258MB 00:05:27.762 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.762 EAL: request: mp_malloc_sync 00:05:27.762 EAL: No shared files mode enabled, IPC is disabled 00:05:27.762 EAL: Heap on socket 0 was shrunk by 258MB 00:05:28.051 EAL: Trying to obtain current memory policy. 00:05:28.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.309 EAL: Restoring previous memory policy: 4 00:05:28.309 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.309 EAL: request: mp_malloc_sync 00:05:28.309 EAL: No shared files mode enabled, IPC is disabled 00:05:28.309 EAL: Heap on socket 0 was expanded by 514MB 00:05:29.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.505 EAL: request: mp_malloc_sync 00:05:29.505 EAL: No shared files mode enabled, IPC is disabled 00:05:29.505 EAL: Heap on socket 0 was shrunk by 514MB 00:05:30.441 EAL: Trying to obtain current memory policy. 
00:05:30.441 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.700 EAL: Restoring previous memory policy: 4 00:05:30.700 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.700 EAL: request: mp_malloc_sync 00:05:30.700 EAL: No shared files mode enabled, IPC is disabled 00:05:30.700 EAL: Heap on socket 0 was expanded by 1026MB 00:05:32.639 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.208 EAL: request: mp_malloc_sync 00:05:33.208 EAL: No shared files mode enabled, IPC is disabled 00:05:33.208 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:35.114 passed 00:05:35.114 00:05:35.114 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.114 suites 1 1 n/a 0 0 00:05:35.114 tests 2 2 2 0 0 00:05:35.114 asserts 5719 5719 5719 0 n/a 00:05:35.114 00:05:35.114 Elapsed time = 9.245 seconds 00:05:35.114 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.114 EAL: request: mp_malloc_sync 00:05:35.114 EAL: No shared files mode enabled, IPC is disabled 00:05:35.114 EAL: Heap on socket 0 was shrunk by 2MB 00:05:35.114 EAL: No shared files mode enabled, IPC is disabled 00:05:35.114 EAL: No shared files mode enabled, IPC is disabled 00:05:35.114 EAL: No shared files mode enabled, IPC is disabled 00:05:35.114 00:05:35.114 real 0m9.560s 00:05:35.114 user 0m8.150s 00:05:35.114 sys 0m1.251s 00:05:35.114 11:15:17 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.114 11:15:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:35.114 ************************************ 00:05:35.114 END TEST env_vtophys 00:05:35.114 ************************************ 00:05:35.114 11:15:17 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:35.114 11:15:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.114 11:15:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.114 11:15:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.114 
************************************ 00:05:35.114 START TEST env_pci 00:05:35.114 ************************************ 00:05:35.114 11:15:17 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:35.114 00:05:35.114 00:05:35.114 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.114 http://cunit.sourceforge.net/ 00:05:35.114 00:05:35.114 00:05:35.114 Suite: pci 00:05:35.114 Test: pci_hook ...[2024-11-20 11:15:17.872557] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56788 has claimed it 00:05:35.114 passed 00:05:35.114 00:05:35.114 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.114 suites 1 1 n/a 0 0 00:05:35.114 tests 1 1 1 0 0 00:05:35.114 asserts 25 25 25 0 n/a 00:05:35.114 00:05:35.114 Elapsed time = 0.007 seconds 00:05:35.114 EAL: Cannot find device (10000:00:01.0) 00:05:35.114 EAL: Failed to attach device on primary process 00:05:35.114 00:05:35.114 real 0m0.108s 00:05:35.114 user 0m0.051s 00:05:35.114 sys 0m0.056s 00:05:35.114 11:15:17 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.114 11:15:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:35.114 ************************************ 00:05:35.114 END TEST env_pci 00:05:35.114 ************************************ 00:05:35.114 11:15:17 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:35.114 11:15:17 env -- env/env.sh@15 -- # uname 00:05:35.114 11:15:17 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:35.114 11:15:17 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:35.114 11:15:17 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:35.114 11:15:17 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:35.114 11:15:17 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.114 11:15:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.114 ************************************ 00:05:35.114 START TEST env_dpdk_post_init 00:05:35.114 ************************************ 00:05:35.114 11:15:18 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:35.114 EAL: Detected CPU lcores: 10 00:05:35.114 EAL: Detected NUMA nodes: 1 00:05:35.114 EAL: Detected shared linkage of DPDK 00:05:35.114 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:35.114 EAL: Selected IOVA mode 'PA' 00:05:35.114 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:35.374 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:35.374 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:35.374 Starting DPDK initialization... 00:05:35.374 Starting SPDK post initialization... 00:05:35.374 SPDK NVMe probe 00:05:35.374 Attaching to 0000:00:10.0 00:05:35.374 Attaching to 0000:00:11.0 00:05:35.374 Attached to 0000:00:10.0 00:05:35.374 Attached to 0000:00:11.0 00:05:35.374 Cleaning up... 
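The env_dpdk_post_init output above reports each NVMe controller twice: "Attaching to <BDF>" when the probe starts and "Attached to <BDF>" on success. When post-processing logs like this one, the attached devices can be pulled out with a short parser; the regex and function name below are our own, with only the line format taken from the excerpt.

```python
import re

# Hedged sketch: extract PCI addresses (domain:bus:device.function) from
# "Attached to <BDF>" lines as seen in the SPDK NVMe probe output above.
# The regex is our own; only the log line format comes from the excerpt.
BDF_RE = re.compile(r"Attached to (\d{4}:[0-9a-f]{2}:[0-9a-f]{2}\.\d)")

def attached_devices(log_text):
    """Return the PCI BDF addresses that successfully attached."""
    return BDF_RE.findall(log_text)

log = """Attaching to 0000:00:10.0
Attaching to 0000:00:11.0
Attached to 0000:00:10.0
Attached to 0000:00:11.0"""
print(attached_devices(log))  # ['0000:00:10.0', '0000:00:11.0']
```

Note that the "Attaching to" lines are deliberately not matched, so a probe that starts but fails to attach is excluded.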
00:05:35.374 00:05:35.374 real 0m0.302s 00:05:35.374 user 0m0.107s 00:05:35.374 sys 0m0.096s 00:05:35.374 11:15:18 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.374 11:15:18 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:35.374 ************************************ 00:05:35.374 END TEST env_dpdk_post_init 00:05:35.374 ************************************ 00:05:35.374 11:15:18 env -- env/env.sh@26 -- # uname 00:05:35.374 11:15:18 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:35.374 11:15:18 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:35.374 11:15:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.374 11:15:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.374 11:15:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.374 ************************************ 00:05:35.374 START TEST env_mem_callbacks 00:05:35.374 ************************************ 00:05:35.374 11:15:18 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:35.374 EAL: Detected CPU lcores: 10 00:05:35.374 EAL: Detected NUMA nodes: 1 00:05:35.374 EAL: Detected shared linkage of DPDK 00:05:35.374 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:35.374 EAL: Selected IOVA mode 'PA' 00:05:35.634 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:35.634 00:05:35.634 00:05:35.634 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.634 http://cunit.sourceforge.net/ 00:05:35.634 00:05:35.634 00:05:35.634 Suite: memory 00:05:35.634 Test: test ... 
00:05:35.634 register 0x200000200000 2097152 00:05:35.634 malloc 3145728 00:05:35.634 register 0x200000400000 4194304 00:05:35.634 buf 0x2000004fffc0 len 3145728 PASSED 00:05:35.634 malloc 64 00:05:35.634 buf 0x2000004ffec0 len 64 PASSED 00:05:35.634 malloc 4194304 00:05:35.634 register 0x200000800000 6291456 00:05:35.634 buf 0x2000009fffc0 len 4194304 PASSED 00:05:35.634 free 0x2000004fffc0 3145728 00:05:35.634 free 0x2000004ffec0 64 00:05:35.634 unregister 0x200000400000 4194304 PASSED 00:05:35.634 free 0x2000009fffc0 4194304 00:05:35.634 unregister 0x200000800000 6291456 PASSED 00:05:35.634 malloc 8388608 00:05:35.634 register 0x200000400000 10485760 00:05:35.634 buf 0x2000005fffc0 len 8388608 PASSED 00:05:35.634 free 0x2000005fffc0 8388608 00:05:35.634 unregister 0x200000400000 10485760 PASSED 00:05:35.634 passed 00:05:35.634 00:05:35.634 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.634 suites 1 1 n/a 0 0 00:05:35.634 tests 1 1 1 0 0 00:05:35.634 asserts 15 15 15 0 n/a 00:05:35.634 00:05:35.635 Elapsed time = 0.085 seconds 00:05:35.635 00:05:35.635 real 0m0.284s 00:05:35.635 user 0m0.113s 00:05:35.635 sys 0m0.070s 00:05:35.635 11:15:18 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.635 11:15:18 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:35.635 ************************************ 00:05:35.635 END TEST env_mem_callbacks 00:05:35.635 ************************************ 00:05:35.635 00:05:35.635 real 0m11.109s 00:05:35.635 user 0m8.896s 00:05:35.635 sys 0m1.866s 00:05:35.635 11:15:18 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.635 11:15:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.635 ************************************ 00:05:35.635 END TEST env 00:05:35.635 ************************************ 00:05:35.895 11:15:18 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:35.895 11:15:18 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.895 11:15:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.895 11:15:18 -- common/autotest_common.sh@10 -- # set +x 00:05:35.895 ************************************ 00:05:35.895 START TEST rpc 00:05:35.895 ************************************ 00:05:35.895 11:15:18 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:35.895 * Looking for test storage... 00:05:35.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:35.895 11:15:18 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.895 11:15:18 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.895 11:15:18 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.895 11:15:19 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.895 11:15:19 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.895 11:15:19 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.895 11:15:19 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.895 11:15:19 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.895 11:15:19 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.895 11:15:19 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.895 11:15:19 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.895 11:15:19 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.895 11:15:19 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.895 11:15:19 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.895 11:15:19 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.895 11:15:19 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:35.895 11:15:19 rpc -- scripts/common.sh@345 -- # : 1 00:05:35.895 11:15:19 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.895 11:15:19 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.895 11:15:19 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:36.155 11:15:19 rpc -- scripts/common.sh@353 -- # local d=1 00:05:36.155 11:15:19 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.155 11:15:19 rpc -- scripts/common.sh@355 -- # echo 1 00:05:36.155 11:15:19 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.155 11:15:19 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:36.155 11:15:19 rpc -- scripts/common.sh@353 -- # local d=2 00:05:36.155 11:15:19 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.155 11:15:19 rpc -- scripts/common.sh@355 -- # echo 2 00:05:36.155 11:15:19 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.155 11:15:19 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.155 11:15:19 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.155 11:15:19 rpc -- scripts/common.sh@368 -- # return 0 00:05:36.155 11:15:19 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.155 11:15:19 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:36.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.155 --rc genhtml_branch_coverage=1 00:05:36.155 --rc genhtml_function_coverage=1 00:05:36.155 --rc genhtml_legend=1 00:05:36.155 --rc geninfo_all_blocks=1 00:05:36.155 --rc geninfo_unexecuted_blocks=1 00:05:36.155 00:05:36.155 ' 00:05:36.155 11:15:19 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:36.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.155 --rc genhtml_branch_coverage=1 00:05:36.155 --rc genhtml_function_coverage=1 00:05:36.155 --rc genhtml_legend=1 00:05:36.155 --rc geninfo_all_blocks=1 00:05:36.155 --rc geninfo_unexecuted_blocks=1 00:05:36.155 00:05:36.155 ' 00:05:36.155 11:15:19 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:36.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:36.155 --rc genhtml_branch_coverage=1 00:05:36.155 --rc genhtml_function_coverage=1 00:05:36.155 --rc genhtml_legend=1 00:05:36.155 --rc geninfo_all_blocks=1 00:05:36.155 --rc geninfo_unexecuted_blocks=1 00:05:36.155 00:05:36.155 ' 00:05:36.155 11:15:19 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:36.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.155 --rc genhtml_branch_coverage=1 00:05:36.155 --rc genhtml_function_coverage=1 00:05:36.155 --rc genhtml_legend=1 00:05:36.155 --rc geninfo_all_blocks=1 00:05:36.155 --rc geninfo_unexecuted_blocks=1 00:05:36.155 00:05:36.155 ' 00:05:36.155 11:15:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56915 00:05:36.155 11:15:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.155 11:15:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56915 00:05:36.155 11:15:19 rpc -- common/autotest_common.sh@835 -- # '[' -z 56915 ']' 00:05:36.155 11:15:19 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.155 11:15:19 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.155 11:15:19 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:36.155 11:15:19 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.155 11:15:19 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.155 11:15:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.155 [2024-11-20 11:15:19.127728] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:05:36.155 [2024-11-20 11:15:19.127857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56915 ] 00:05:36.414 [2024-11-20 11:15:19.303380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.414 [2024-11-20 11:15:19.424060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:36.414 [2024-11-20 11:15:19.424128] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56915' to capture a snapshot of events at runtime. 00:05:36.414 [2024-11-20 11:15:19.424138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:36.414 [2024-11-20 11:15:19.424148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:36.414 [2024-11-20 11:15:19.424156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56915 for offline analysis/debug. 
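The xtrace earlier in this section walks scripts/common.sh's `cmp_versions` as it evaluates `lt 1.15 2` for the installed lcov: it splits each version on `.-:`, reads the fields into arrays, and compares them numerically position by position. A Python equivalent of that logic might look like the sketch below; this is our own reconstruction of the shell behavior (including treating missing trailing fields as 0), not SPDK code.

```python
import re

# Hedged sketch of cmp_versions' "lt" path from scripts/common.sh:
# split on [.-:], compare fields as integers, pad the shorter version
# with zeros. Reconstructed from the xtrace above, not taken from SPDK.
def version_lt(a, b):
    """True if dotted version a sorts strictly before b, field by field."""
    va = [int(x) for x in re.split(r"[.\-:]", a)]
    vb = [int(x) for x in re.split(r"[.\-:]", b)]
    n = max(len(va), len(vb))
    va += [0] * (n - len(va))  # the shell uses ${ver[v]:-0} for missing fields
    vb += [0] * (n - len(vb))
    return va < vb

print(version_lt("1.15", "2"))   # True: lcov 1.15 sorts before 2
print(version_lt("2.0", "1.15")) # False
```

This is why the trace above ends by selecting the legacy `--rc lcov_branch_coverage=1` option names: the detected lcov is older than 2.x.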
00:05:36.414 [2024-11-20 11:15:19.425399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.351 11:15:20 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.351 11:15:20 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:37.351 11:15:20 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:37.352 11:15:20 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:37.352 11:15:20 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:37.352 11:15:20 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:37.352 11:15:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.352 11:15:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.352 11:15:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.352 ************************************ 00:05:37.352 START TEST rpc_integrity 00:05:37.352 ************************************ 00:05:37.352 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:37.352 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:37.352 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.352 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.352 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.352 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:37.352 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:37.352 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:37.352 11:15:20 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:37.352 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.352 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.352 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.352 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:37.352 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:37.352 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.352 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.352 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.352 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:37.352 { 00:05:37.352 "name": "Malloc0", 00:05:37.352 "aliases": [ 00:05:37.352 "4b702eb4-0f5d-4abd-9005-743b7b1eeb2b" 00:05:37.352 ], 00:05:37.352 "product_name": "Malloc disk", 00:05:37.352 "block_size": 512, 00:05:37.352 "num_blocks": 16384, 00:05:37.352 "uuid": "4b702eb4-0f5d-4abd-9005-743b7b1eeb2b", 00:05:37.352 "assigned_rate_limits": { 00:05:37.352 "rw_ios_per_sec": 0, 00:05:37.352 "rw_mbytes_per_sec": 0, 00:05:37.352 "r_mbytes_per_sec": 0, 00:05:37.352 "w_mbytes_per_sec": 0 00:05:37.352 }, 00:05:37.352 "claimed": false, 00:05:37.352 "zoned": false, 00:05:37.352 "supported_io_types": { 00:05:37.352 "read": true, 00:05:37.352 "write": true, 00:05:37.352 "unmap": true, 00:05:37.352 "flush": true, 00:05:37.352 "reset": true, 00:05:37.352 "nvme_admin": false, 00:05:37.352 "nvme_io": false, 00:05:37.352 "nvme_io_md": false, 00:05:37.352 "write_zeroes": true, 00:05:37.352 "zcopy": true, 00:05:37.352 "get_zone_info": false, 00:05:37.352 "zone_management": false, 00:05:37.352 "zone_append": false, 00:05:37.352 "compare": false, 00:05:37.352 "compare_and_write": false, 00:05:37.352 "abort": true, 00:05:37.352 "seek_hole": false, 
00:05:37.352 "seek_data": false, 00:05:37.352 "copy": true, 00:05:37.352 "nvme_iov_md": false 00:05:37.352 }, 00:05:37.352 "memory_domains": [ 00:05:37.352 { 00:05:37.352 "dma_device_id": "system", 00:05:37.352 "dma_device_type": 1 00:05:37.352 }, 00:05:37.352 { 00:05:37.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.352 "dma_device_type": 2 00:05:37.352 } 00:05:37.352 ], 00:05:37.352 "driver_specific": {} 00:05:37.352 } 00:05:37.352 ]' 00:05:37.352 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:37.612 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:37.612 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:37.612 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.612 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.612 [2024-11-20 11:15:20.514963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:37.612 [2024-11-20 11:15:20.515041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:37.612 [2024-11-20 11:15:20.515071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:37.612 [2024-11-20 11:15:20.515090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:37.612 [2024-11-20 11:15:20.517642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:37.612 [2024-11-20 11:15:20.517689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:37.612 Passthru0 00:05:37.612 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.612 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:37.612 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.612 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:37.612 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.612 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:37.612 { 00:05:37.612 "name": "Malloc0", 00:05:37.612 "aliases": [ 00:05:37.612 "4b702eb4-0f5d-4abd-9005-743b7b1eeb2b" 00:05:37.612 ], 00:05:37.612 "product_name": "Malloc disk", 00:05:37.612 "block_size": 512, 00:05:37.612 "num_blocks": 16384, 00:05:37.612 "uuid": "4b702eb4-0f5d-4abd-9005-743b7b1eeb2b", 00:05:37.612 "assigned_rate_limits": { 00:05:37.612 "rw_ios_per_sec": 0, 00:05:37.612 "rw_mbytes_per_sec": 0, 00:05:37.612 "r_mbytes_per_sec": 0, 00:05:37.612 "w_mbytes_per_sec": 0 00:05:37.612 }, 00:05:37.612 "claimed": true, 00:05:37.612 "claim_type": "exclusive_write", 00:05:37.612 "zoned": false, 00:05:37.612 "supported_io_types": { 00:05:37.612 "read": true, 00:05:37.612 "write": true, 00:05:37.612 "unmap": true, 00:05:37.612 "flush": true, 00:05:37.612 "reset": true, 00:05:37.612 "nvme_admin": false, 00:05:37.612 "nvme_io": false, 00:05:37.612 "nvme_io_md": false, 00:05:37.612 "write_zeroes": true, 00:05:37.612 "zcopy": true, 00:05:37.612 "get_zone_info": false, 00:05:37.612 "zone_management": false, 00:05:37.612 "zone_append": false, 00:05:37.612 "compare": false, 00:05:37.612 "compare_and_write": false, 00:05:37.612 "abort": true, 00:05:37.612 "seek_hole": false, 00:05:37.612 "seek_data": false, 00:05:37.612 "copy": true, 00:05:37.612 "nvme_iov_md": false 00:05:37.612 }, 00:05:37.612 "memory_domains": [ 00:05:37.612 { 00:05:37.612 "dma_device_id": "system", 00:05:37.612 "dma_device_type": 1 00:05:37.612 }, 00:05:37.612 { 00:05:37.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.612 "dma_device_type": 2 00:05:37.612 } 00:05:37.612 ], 00:05:37.612 "driver_specific": {} 00:05:37.612 }, 00:05:37.612 { 00:05:37.612 "name": "Passthru0", 00:05:37.612 "aliases": [ 00:05:37.612 "351b559f-66a2-5ad3-af13-2934381766e0" 00:05:37.612 ], 00:05:37.612 "product_name": "passthru", 00:05:37.612 
"block_size": 512, 00:05:37.612 "num_blocks": 16384, 00:05:37.612 "uuid": "351b559f-66a2-5ad3-af13-2934381766e0", 00:05:37.612 "assigned_rate_limits": { 00:05:37.612 "rw_ios_per_sec": 0, 00:05:37.612 "rw_mbytes_per_sec": 0, 00:05:37.612 "r_mbytes_per_sec": 0, 00:05:37.612 "w_mbytes_per_sec": 0 00:05:37.612 }, 00:05:37.612 "claimed": false, 00:05:37.612 "zoned": false, 00:05:37.612 "supported_io_types": { 00:05:37.612 "read": true, 00:05:37.612 "write": true, 00:05:37.612 "unmap": true, 00:05:37.612 "flush": true, 00:05:37.612 "reset": true, 00:05:37.612 "nvme_admin": false, 00:05:37.612 "nvme_io": false, 00:05:37.612 "nvme_io_md": false, 00:05:37.612 "write_zeroes": true, 00:05:37.612 "zcopy": true, 00:05:37.612 "get_zone_info": false, 00:05:37.612 "zone_management": false, 00:05:37.612 "zone_append": false, 00:05:37.612 "compare": false, 00:05:37.612 "compare_and_write": false, 00:05:37.612 "abort": true, 00:05:37.612 "seek_hole": false, 00:05:37.612 "seek_data": false, 00:05:37.612 "copy": true, 00:05:37.612 "nvme_iov_md": false 00:05:37.612 }, 00:05:37.612 "memory_domains": [ 00:05:37.612 { 00:05:37.612 "dma_device_id": "system", 00:05:37.612 "dma_device_type": 1 00:05:37.612 }, 00:05:37.612 { 00:05:37.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.612 "dma_device_type": 2 00:05:37.612 } 00:05:37.612 ], 00:05:37.612 "driver_specific": { 00:05:37.612 "passthru": { 00:05:37.612 "name": "Passthru0", 00:05:37.612 "base_bdev_name": "Malloc0" 00:05:37.612 } 00:05:37.612 } 00:05:37.612 } 00:05:37.612 ]' 00:05:37.612 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:37.612 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:37.612 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:37.612 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.612 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.612 11:15:20 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.613 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:37.613 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.613 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.613 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.613 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:37.613 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.613 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.613 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.613 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:37.613 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:37.613 11:15:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:37.613 00:05:37.613 real 0m0.374s 00:05:37.613 user 0m0.203s 00:05:37.613 sys 0m0.057s 00:05:37.613 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.613 11:15:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.613 ************************************ 00:05:37.613 END TEST rpc_integrity 00:05:37.613 ************************************ 00:05:37.873 11:15:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:37.873 11:15:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.873 11:15:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.873 11:15:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.873 ************************************ 00:05:37.873 START TEST rpc_plugins 00:05:37.873 ************************************ 00:05:37.873 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:37.873 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:37.873 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.873 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.873 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.873 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:37.873 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:37.873 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.873 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.873 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.873 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:37.873 { 00:05:37.873 "name": "Malloc1", 00:05:37.873 "aliases": [ 00:05:37.873 "2f24d36f-50b9-4b18-8e4f-29f525509280" 00:05:37.873 ], 00:05:37.873 "product_name": "Malloc disk", 00:05:37.873 "block_size": 4096, 00:05:37.873 "num_blocks": 256, 00:05:37.873 "uuid": "2f24d36f-50b9-4b18-8e4f-29f525509280", 00:05:37.873 "assigned_rate_limits": { 00:05:37.873 "rw_ios_per_sec": 0, 00:05:37.873 "rw_mbytes_per_sec": 0, 00:05:37.873 "r_mbytes_per_sec": 0, 00:05:37.873 "w_mbytes_per_sec": 0 00:05:37.873 }, 00:05:37.873 "claimed": false, 00:05:37.873 "zoned": false, 00:05:37.873 "supported_io_types": { 00:05:37.873 "read": true, 00:05:37.873 "write": true, 00:05:37.873 "unmap": true, 00:05:37.873 "flush": true, 00:05:37.873 "reset": true, 00:05:37.873 "nvme_admin": false, 00:05:37.873 "nvme_io": false, 00:05:37.873 "nvme_io_md": false, 00:05:37.873 "write_zeroes": true, 00:05:37.873 "zcopy": true, 00:05:37.873 "get_zone_info": false, 00:05:37.873 "zone_management": false, 00:05:37.873 "zone_append": false, 00:05:37.873 "compare": false, 00:05:37.873 "compare_and_write": false, 00:05:37.873 "abort": true, 00:05:37.873 "seek_hole": false, 00:05:37.873 "seek_data": false, 00:05:37.873 "copy": 
true, 00:05:37.873 "nvme_iov_md": false 00:05:37.873 }, 00:05:37.873 "memory_domains": [ 00:05:37.873 { 00:05:37.873 "dma_device_id": "system", 00:05:37.873 "dma_device_type": 1 00:05:37.873 }, 00:05:37.873 { 00:05:37.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.873 "dma_device_type": 2 00:05:37.873 } 00:05:37.873 ], 00:05:37.873 "driver_specific": {} 00:05:37.873 } 00:05:37.873 ]' 00:05:37.873 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:37.873 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:37.873 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:37.873 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.873 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.873 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.873 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:37.873 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.873 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.873 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.873 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:37.873 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:37.873 11:15:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:37.873 00:05:37.873 real 0m0.166s 00:05:37.873 user 0m0.085s 00:05:37.873 sys 0m0.033s 00:05:37.873 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.873 11:15:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.873 ************************************ 00:05:37.873 END TEST rpc_plugins 00:05:37.873 ************************************ 00:05:38.134 11:15:20 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:38.134 11:15:20 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.134 11:15:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.134 11:15:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.134 ************************************ 00:05:38.134 START TEST rpc_trace_cmd_test 00:05:38.134 ************************************ 00:05:38.134 11:15:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:38.134 11:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:38.134 11:15:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:38.134 11:15:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.134 11:15:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:38.134 11:15:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.134 11:15:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:38.134 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56915", 00:05:38.134 "tpoint_group_mask": "0x8", 00:05:38.134 "iscsi_conn": { 00:05:38.134 "mask": "0x2", 00:05:38.134 "tpoint_mask": "0x0" 00:05:38.134 }, 00:05:38.134 "scsi": { 00:05:38.134 "mask": "0x4", 00:05:38.134 "tpoint_mask": "0x0" 00:05:38.134 }, 00:05:38.134 "bdev": { 00:05:38.134 "mask": "0x8", 00:05:38.134 "tpoint_mask": "0xffffffffffffffff" 00:05:38.134 }, 00:05:38.134 "nvmf_rdma": { 00:05:38.134 "mask": "0x10", 00:05:38.134 "tpoint_mask": "0x0" 00:05:38.134 }, 00:05:38.134 "nvmf_tcp": { 00:05:38.134 "mask": "0x20", 00:05:38.134 "tpoint_mask": "0x0" 00:05:38.134 }, 00:05:38.134 "ftl": { 00:05:38.134 "mask": "0x40", 00:05:38.134 "tpoint_mask": "0x0" 00:05:38.134 }, 00:05:38.134 "blobfs": { 00:05:38.134 "mask": "0x80", 00:05:38.134 "tpoint_mask": "0x0" 00:05:38.134 }, 00:05:38.134 "dsa": { 00:05:38.134 "mask": "0x200", 00:05:38.134 "tpoint_mask": "0x0" 00:05:38.134 }, 00:05:38.134 "thread": { 00:05:38.135 "mask": "0x400", 00:05:38.135 
"tpoint_mask": "0x0" 00:05:38.135 }, 00:05:38.135 "nvme_pcie": { 00:05:38.135 "mask": "0x800", 00:05:38.135 "tpoint_mask": "0x0" 00:05:38.135 }, 00:05:38.135 "iaa": { 00:05:38.135 "mask": "0x1000", 00:05:38.135 "tpoint_mask": "0x0" 00:05:38.135 }, 00:05:38.135 "nvme_tcp": { 00:05:38.135 "mask": "0x2000", 00:05:38.135 "tpoint_mask": "0x0" 00:05:38.135 }, 00:05:38.135 "bdev_nvme": { 00:05:38.135 "mask": "0x4000", 00:05:38.135 "tpoint_mask": "0x0" 00:05:38.135 }, 00:05:38.135 "sock": { 00:05:38.135 "mask": "0x8000", 00:05:38.135 "tpoint_mask": "0x0" 00:05:38.135 }, 00:05:38.135 "blob": { 00:05:38.135 "mask": "0x10000", 00:05:38.135 "tpoint_mask": "0x0" 00:05:38.135 }, 00:05:38.135 "bdev_raid": { 00:05:38.135 "mask": "0x20000", 00:05:38.135 "tpoint_mask": "0x0" 00:05:38.135 }, 00:05:38.135 "scheduler": { 00:05:38.135 "mask": "0x40000", 00:05:38.135 "tpoint_mask": "0x0" 00:05:38.135 } 00:05:38.135 }' 00:05:38.135 11:15:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:38.135 11:15:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:38.135 11:15:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:38.135 11:15:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:38.135 11:15:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:38.135 11:15:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:38.135 11:15:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:38.135 11:15:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:38.135 11:15:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:38.135 11:15:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:38.135 00:05:38.135 real 0m0.214s 00:05:38.135 user 0m0.164s 00:05:38.135 sys 0m0.040s 00:05:38.135 11:15:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:38.135 11:15:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:38.135 ************************************ 00:05:38.135 END TEST rpc_trace_cmd_test 00:05:38.135 ************************************ 00:05:38.395 11:15:21 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:38.395 11:15:21 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:38.395 11:15:21 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:38.395 11:15:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.395 11:15:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.395 11:15:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.395 ************************************ 00:05:38.395 START TEST rpc_daemon_integrity 00:05:38.395 ************************************ 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:38.395 { 00:05:38.395 "name": "Malloc2", 00:05:38.395 "aliases": [ 00:05:38.395 "1c207682-6381-4332-b063-261d116a3040" 00:05:38.395 ], 00:05:38.395 "product_name": "Malloc disk", 00:05:38.395 "block_size": 512, 00:05:38.395 "num_blocks": 16384, 00:05:38.395 "uuid": "1c207682-6381-4332-b063-261d116a3040", 00:05:38.395 "assigned_rate_limits": { 00:05:38.395 "rw_ios_per_sec": 0, 00:05:38.395 "rw_mbytes_per_sec": 0, 00:05:38.395 "r_mbytes_per_sec": 0, 00:05:38.395 "w_mbytes_per_sec": 0 00:05:38.395 }, 00:05:38.395 "claimed": false, 00:05:38.395 "zoned": false, 00:05:38.395 "supported_io_types": { 00:05:38.395 "read": true, 00:05:38.395 "write": true, 00:05:38.395 "unmap": true, 00:05:38.395 "flush": true, 00:05:38.395 "reset": true, 00:05:38.395 "nvme_admin": false, 00:05:38.395 "nvme_io": false, 00:05:38.395 "nvme_io_md": false, 00:05:38.395 "write_zeroes": true, 00:05:38.395 "zcopy": true, 00:05:38.395 "get_zone_info": false, 00:05:38.395 "zone_management": false, 00:05:38.395 "zone_append": false, 00:05:38.395 "compare": false, 00:05:38.395 "compare_and_write": false, 00:05:38.395 "abort": true, 00:05:38.395 "seek_hole": false, 00:05:38.395 "seek_data": false, 00:05:38.395 "copy": true, 00:05:38.395 "nvme_iov_md": false 00:05:38.395 }, 00:05:38.395 "memory_domains": [ 00:05:38.395 { 00:05:38.395 "dma_device_id": "system", 00:05:38.395 "dma_device_type": 1 00:05:38.395 }, 00:05:38.395 { 00:05:38.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.395 "dma_device_type": 2 00:05:38.395 } 
00:05:38.395 ], 00:05:38.395 "driver_specific": {} 00:05:38.395 } 00:05:38.395 ]' 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.395 [2024-11-20 11:15:21.443437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:38.395 [2024-11-20 11:15:21.443545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:38.395 [2024-11-20 11:15:21.443597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:38.395 [2024-11-20 11:15:21.443611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:38.395 [2024-11-20 11:15:21.446129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:38.395 [2024-11-20 11:15:21.446179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:38.395 Passthru0 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.395 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:38.395 { 00:05:38.395 "name": "Malloc2", 00:05:38.395 "aliases": [ 00:05:38.395 "1c207682-6381-4332-b063-261d116a3040" 
00:05:38.395 ], 00:05:38.395 "product_name": "Malloc disk", 00:05:38.395 "block_size": 512, 00:05:38.395 "num_blocks": 16384, 00:05:38.395 "uuid": "1c207682-6381-4332-b063-261d116a3040", 00:05:38.395 "assigned_rate_limits": { 00:05:38.395 "rw_ios_per_sec": 0, 00:05:38.395 "rw_mbytes_per_sec": 0, 00:05:38.395 "r_mbytes_per_sec": 0, 00:05:38.395 "w_mbytes_per_sec": 0 00:05:38.395 }, 00:05:38.395 "claimed": true, 00:05:38.396 "claim_type": "exclusive_write", 00:05:38.396 "zoned": false, 00:05:38.396 "supported_io_types": { 00:05:38.396 "read": true, 00:05:38.396 "write": true, 00:05:38.396 "unmap": true, 00:05:38.396 "flush": true, 00:05:38.396 "reset": true, 00:05:38.396 "nvme_admin": false, 00:05:38.396 "nvme_io": false, 00:05:38.396 "nvme_io_md": false, 00:05:38.396 "write_zeroes": true, 00:05:38.396 "zcopy": true, 00:05:38.396 "get_zone_info": false, 00:05:38.396 "zone_management": false, 00:05:38.396 "zone_append": false, 00:05:38.396 "compare": false, 00:05:38.396 "compare_and_write": false, 00:05:38.396 "abort": true, 00:05:38.396 "seek_hole": false, 00:05:38.396 "seek_data": false, 00:05:38.396 "copy": true, 00:05:38.396 "nvme_iov_md": false 00:05:38.396 }, 00:05:38.396 "memory_domains": [ 00:05:38.396 { 00:05:38.396 "dma_device_id": "system", 00:05:38.396 "dma_device_type": 1 00:05:38.396 }, 00:05:38.396 { 00:05:38.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.396 "dma_device_type": 2 00:05:38.396 } 00:05:38.396 ], 00:05:38.396 "driver_specific": {} 00:05:38.396 }, 00:05:38.396 { 00:05:38.396 "name": "Passthru0", 00:05:38.396 "aliases": [ 00:05:38.396 "28b85b70-ffcb-5bb9-b095-bf629739156c" 00:05:38.396 ], 00:05:38.396 "product_name": "passthru", 00:05:38.396 "block_size": 512, 00:05:38.396 "num_blocks": 16384, 00:05:38.396 "uuid": "28b85b70-ffcb-5bb9-b095-bf629739156c", 00:05:38.396 "assigned_rate_limits": { 00:05:38.396 "rw_ios_per_sec": 0, 00:05:38.396 "rw_mbytes_per_sec": 0, 00:05:38.396 "r_mbytes_per_sec": 0, 00:05:38.396 "w_mbytes_per_sec": 0 
00:05:38.396 }, 00:05:38.396 "claimed": false, 00:05:38.396 "zoned": false, 00:05:38.396 "supported_io_types": { 00:05:38.396 "read": true, 00:05:38.396 "write": true, 00:05:38.396 "unmap": true, 00:05:38.396 "flush": true, 00:05:38.396 "reset": true, 00:05:38.396 "nvme_admin": false, 00:05:38.396 "nvme_io": false, 00:05:38.396 "nvme_io_md": false, 00:05:38.396 "write_zeroes": true, 00:05:38.396 "zcopy": true, 00:05:38.396 "get_zone_info": false, 00:05:38.396 "zone_management": false, 00:05:38.396 "zone_append": false, 00:05:38.396 "compare": false, 00:05:38.396 "compare_and_write": false, 00:05:38.396 "abort": true, 00:05:38.396 "seek_hole": false, 00:05:38.396 "seek_data": false, 00:05:38.396 "copy": true, 00:05:38.396 "nvme_iov_md": false 00:05:38.396 }, 00:05:38.396 "memory_domains": [ 00:05:38.396 { 00:05:38.396 "dma_device_id": "system", 00:05:38.396 "dma_device_type": 1 00:05:38.396 }, 00:05:38.396 { 00:05:38.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.396 "dma_device_type": 2 00:05:38.396 } 00:05:38.396 ], 00:05:38.396 "driver_specific": { 00:05:38.396 "passthru": { 00:05:38.396 "name": "Passthru0", 00:05:38.396 "base_bdev_name": "Malloc2" 00:05:38.396 } 00:05:38.396 } 00:05:38.396 } 00:05:38.396 ]' 00:05:38.396 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:38.656 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:38.656 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:38.656 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.656 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.656 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.656 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:38.656 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:38.656 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.656 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.656 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:38.656 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.656 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.656 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.656 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:38.656 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:38.656 11:15:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:38.656 00:05:38.656 real 0m0.368s 00:05:38.656 user 0m0.210s 00:05:38.656 sys 0m0.053s 00:05:38.656 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.656 11:15:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.656 ************************************ 00:05:38.656 END TEST rpc_daemon_integrity 00:05:38.656 ************************************ 00:05:38.656 11:15:21 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:38.656 11:15:21 rpc -- rpc/rpc.sh@84 -- # killprocess 56915 00:05:38.656 11:15:21 rpc -- common/autotest_common.sh@954 -- # '[' -z 56915 ']' 00:05:38.656 11:15:21 rpc -- common/autotest_common.sh@958 -- # kill -0 56915 00:05:38.656 11:15:21 rpc -- common/autotest_common.sh@959 -- # uname 00:05:38.656 11:15:21 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.656 11:15:21 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56915 00:05:38.656 11:15:21 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.656 11:15:21 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.656 
killing process with pid 56915 00:05:38.656 11:15:21 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56915' 00:05:38.656 11:15:21 rpc -- common/autotest_common.sh@973 -- # kill 56915 00:05:38.656 11:15:21 rpc -- common/autotest_common.sh@978 -- # wait 56915 00:05:41.951 00:05:41.951 real 0m5.521s 00:05:41.951 user 0m6.028s 00:05:41.951 sys 0m0.963s 00:05:41.951 11:15:24 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.951 11:15:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.951 ************************************ 00:05:41.951 END TEST rpc 00:05:41.951 ************************************ 00:05:41.951 11:15:24 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:41.951 11:15:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.951 11:15:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.951 11:15:24 -- common/autotest_common.sh@10 -- # set +x 00:05:41.951 ************************************ 00:05:41.951 START TEST skip_rpc 00:05:41.951 ************************************ 00:05:41.951 11:15:24 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:41.951 * Looking for test storage... 
00:05:41.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.951 11:15:24 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.951 11:15:24 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.951 11:15:24 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.951 11:15:24 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.951 11:15:24 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:41.951 11:15:24 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.951 11:15:24 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.951 --rc genhtml_branch_coverage=1 00:05:41.951 --rc genhtml_function_coverage=1 00:05:41.951 --rc genhtml_legend=1 00:05:41.951 --rc geninfo_all_blocks=1 00:05:41.951 --rc geninfo_unexecuted_blocks=1 00:05:41.951 00:05:41.951 ' 00:05:41.951 11:15:24 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.951 --rc genhtml_branch_coverage=1 00:05:41.951 --rc genhtml_function_coverage=1 00:05:41.951 --rc genhtml_legend=1 00:05:41.951 --rc geninfo_all_blocks=1 00:05:41.951 --rc geninfo_unexecuted_blocks=1 00:05:41.951 00:05:41.951 ' 00:05:41.951 11:15:24 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:41.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.952 --rc genhtml_branch_coverage=1 00:05:41.952 --rc genhtml_function_coverage=1 00:05:41.952 --rc genhtml_legend=1 00:05:41.952 --rc geninfo_all_blocks=1 00:05:41.952 --rc geninfo_unexecuted_blocks=1 00:05:41.952 00:05:41.952 ' 00:05:41.952 11:15:24 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.952 --rc genhtml_branch_coverage=1 00:05:41.952 --rc genhtml_function_coverage=1 00:05:41.952 --rc genhtml_legend=1 00:05:41.952 --rc geninfo_all_blocks=1 00:05:41.952 --rc geninfo_unexecuted_blocks=1 00:05:41.952 00:05:41.952 ' 00:05:41.952 11:15:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:41.952 11:15:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:41.952 11:15:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:41.952 11:15:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.952 11:15:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.952 11:15:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.952 ************************************ 00:05:41.952 START TEST skip_rpc 00:05:41.952 ************************************ 00:05:41.952 11:15:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:41.952 11:15:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57144 00:05:41.952 11:15:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:41.952 11:15:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.952 11:15:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:41.952 [2024-11-20 11:15:24.720824] Starting SPDK v25.01-pre 
git sha1 0383e688b / DPDK 24.03.0 initialization... 00:05:41.952 [2024-11-20 11:15:24.720944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57144 ] 00:05:41.952 [2024-11-20 11:15:24.894826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.952 [2024-11-20 11:15:25.022100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57144 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57144 ']' 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57144 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57144 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.235 killing process with pid 57144 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57144' 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57144 00:05:47.235 11:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57144 00:05:49.142 00:05:49.142 real 0m7.545s 00:05:49.142 user 0m7.100s 00:05:49.142 sys 0m0.369s 00:05:49.142 11:15:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.142 11:15:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.142 ************************************ 00:05:49.142 END TEST skip_rpc 00:05:49.142 ************************************ 00:05:49.142 11:15:32 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:49.142 11:15:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.142 11:15:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.142 11:15:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.142 
************************************ 00:05:49.142 START TEST skip_rpc_with_json 00:05:49.142 ************************************ 00:05:49.142 11:15:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:49.142 11:15:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:49.142 11:15:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57259 00:05:49.142 11:15:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.142 11:15:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.142 11:15:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57259 00:05:49.142 11:15:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57259 ']' 00:05:49.142 11:15:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.142 11:15:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.142 11:15:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.142 11:15:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.142 11:15:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.401 [2024-11-20 11:15:32.331809] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:05:49.401 [2024-11-20 11:15:32.331933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57259 ] 00:05:49.401 [2024-11-20 11:15:32.484680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.660 [2024-11-20 11:15:32.592886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.599 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.599 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:50.599 11:15:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:50.599 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.599 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.599 [2024-11-20 11:15:33.469705] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:50.599 request: 00:05:50.599 { 00:05:50.599 "trtype": "tcp", 00:05:50.599 "method": "nvmf_get_transports", 00:05:50.599 "req_id": 1 00:05:50.599 } 00:05:50.599 Got JSON-RPC error response 00:05:50.599 response: 00:05:50.599 { 00:05:50.599 "code": -19, 00:05:50.599 "message": "No such device" 00:05:50.599 } 00:05:50.599 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:50.599 11:15:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:50.599 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.599 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.599 [2024-11-20 11:15:33.481804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:50.599 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.599 11:15:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:50.599 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.599 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.599 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.599 11:15:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:50.599 { 00:05:50.599 "subsystems": [ 00:05:50.599 { 00:05:50.599 "subsystem": "fsdev", 00:05:50.599 "config": [ 00:05:50.599 { 00:05:50.599 "method": "fsdev_set_opts", 00:05:50.599 "params": { 00:05:50.599 "fsdev_io_pool_size": 65535, 00:05:50.599 "fsdev_io_cache_size": 256 00:05:50.599 } 00:05:50.599 } 00:05:50.599 ] 00:05:50.599 }, 00:05:50.599 { 00:05:50.599 "subsystem": "keyring", 00:05:50.599 "config": [] 00:05:50.599 }, 00:05:50.599 { 00:05:50.599 "subsystem": "iobuf", 00:05:50.599 "config": [ 00:05:50.599 { 00:05:50.599 "method": "iobuf_set_options", 00:05:50.600 "params": { 00:05:50.600 "small_pool_count": 8192, 00:05:50.600 "large_pool_count": 1024, 00:05:50.600 "small_bufsize": 8192, 00:05:50.600 "large_bufsize": 135168, 00:05:50.600 "enable_numa": false 00:05:50.600 } 00:05:50.600 } 00:05:50.600 ] 00:05:50.600 }, 00:05:50.600 { 00:05:50.600 "subsystem": "sock", 00:05:50.600 "config": [ 00:05:50.600 { 00:05:50.600 "method": "sock_set_default_impl", 00:05:50.600 "params": { 00:05:50.600 "impl_name": "posix" 00:05:50.600 } 00:05:50.600 }, 00:05:50.600 { 00:05:50.600 "method": "sock_impl_set_options", 00:05:50.600 "params": { 00:05:50.600 "impl_name": "ssl", 00:05:50.600 "recv_buf_size": 4096, 00:05:50.600 "send_buf_size": 4096, 00:05:50.600 "enable_recv_pipe": true, 00:05:50.600 "enable_quickack": false, 00:05:50.600 
"enable_placement_id": 0, 00:05:50.600 "enable_zerocopy_send_server": true, 00:05:50.600 "enable_zerocopy_send_client": false, 00:05:50.600 "zerocopy_threshold": 0, 00:05:50.600 "tls_version": 0, 00:05:50.600 "enable_ktls": false 00:05:50.600 } 00:05:50.600 }, 00:05:50.600 { 00:05:50.600 "method": "sock_impl_set_options", 00:05:50.600 "params": { 00:05:50.600 "impl_name": "posix", 00:05:50.600 "recv_buf_size": 2097152, 00:05:50.600 "send_buf_size": 2097152, 00:05:50.600 "enable_recv_pipe": true, 00:05:50.600 "enable_quickack": false, 00:05:50.600 "enable_placement_id": 0, 00:05:50.600 "enable_zerocopy_send_server": true, 00:05:50.600 "enable_zerocopy_send_client": false, 00:05:50.600 "zerocopy_threshold": 0, 00:05:50.600 "tls_version": 0, 00:05:50.600 "enable_ktls": false 00:05:50.600 } 00:05:50.600 } 00:05:50.600 ] 00:05:50.600 }, 00:05:50.600 { 00:05:50.600 "subsystem": "vmd", 00:05:50.600 "config": [] 00:05:50.600 }, 00:05:50.600 { 00:05:50.600 "subsystem": "accel", 00:05:50.600 "config": [ 00:05:50.600 { 00:05:50.600 "method": "accel_set_options", 00:05:50.600 "params": { 00:05:50.600 "small_cache_size": 128, 00:05:50.600 "large_cache_size": 16, 00:05:50.600 "task_count": 2048, 00:05:50.600 "sequence_count": 2048, 00:05:50.600 "buf_count": 2048 00:05:50.600 } 00:05:50.600 } 00:05:50.600 ] 00:05:50.600 }, 00:05:50.600 { 00:05:50.600 "subsystem": "bdev", 00:05:50.600 "config": [ 00:05:50.600 { 00:05:50.600 "method": "bdev_set_options", 00:05:50.600 "params": { 00:05:50.600 "bdev_io_pool_size": 65535, 00:05:50.600 "bdev_io_cache_size": 256, 00:05:50.600 "bdev_auto_examine": true, 00:05:50.600 "iobuf_small_cache_size": 128, 00:05:50.600 "iobuf_large_cache_size": 16 00:05:50.600 } 00:05:50.600 }, 00:05:50.600 { 00:05:50.600 "method": "bdev_raid_set_options", 00:05:50.600 "params": { 00:05:50.600 "process_window_size_kb": 1024, 00:05:50.600 "process_max_bandwidth_mb_sec": 0 00:05:50.600 } 00:05:50.600 }, 00:05:50.600 { 00:05:50.600 "method": "bdev_iscsi_set_options", 
00:05:50.600 "params": { 00:05:50.600 "timeout_sec": 30 00:05:50.600 } 00:05:50.600 }, 00:05:50.600 { 00:05:50.600 "method": "bdev_nvme_set_options", 00:05:50.600 "params": { 00:05:50.600 "action_on_timeout": "none", 00:05:50.600 "timeout_us": 0, 00:05:50.600 "timeout_admin_us": 0, 00:05:50.600 "keep_alive_timeout_ms": 10000, 00:05:50.600 "arbitration_burst": 0, 00:05:50.600 "low_priority_weight": 0, 00:05:50.600 "medium_priority_weight": 0, 00:05:50.600 "high_priority_weight": 0, 00:05:50.600 "nvme_adminq_poll_period_us": 10000, 00:05:50.600 "nvme_ioq_poll_period_us": 0, 00:05:50.600 "io_queue_requests": 0, 00:05:50.600 "delay_cmd_submit": true, 00:05:50.600 "transport_retry_count": 4, 00:05:50.600 "bdev_retry_count": 3, 00:05:50.600 "transport_ack_timeout": 0, 00:05:50.600 "ctrlr_loss_timeout_sec": 0, 00:05:50.600 "reconnect_delay_sec": 0, 00:05:50.600 "fast_io_fail_timeout_sec": 0, 00:05:50.600 "disable_auto_failback": false, 00:05:50.600 "generate_uuids": false, 00:05:50.600 "transport_tos": 0, 00:05:50.600 "nvme_error_stat": false, 00:05:50.600 "rdma_srq_size": 0, 00:05:50.600 "io_path_stat": false, 00:05:50.600 "allow_accel_sequence": false, 00:05:50.600 "rdma_max_cq_size": 0, 00:05:50.600 "rdma_cm_event_timeout_ms": 0, 00:05:50.600 "dhchap_digests": [ 00:05:50.600 "sha256", 00:05:50.600 "sha384", 00:05:50.600 "sha512" 00:05:50.600 ], 00:05:50.600 "dhchap_dhgroups": [ 00:05:50.600 "null", 00:05:50.600 "ffdhe2048", 00:05:50.600 "ffdhe3072", 00:05:50.600 "ffdhe4096", 00:05:50.600 "ffdhe6144", 00:05:50.600 "ffdhe8192" 00:05:50.600 ] 00:05:50.600 } 00:05:50.600 }, 00:05:50.600 { 00:05:50.600 "method": "bdev_nvme_set_hotplug", 00:05:50.600 "params": { 00:05:50.600 "period_us": 100000, 00:05:50.600 "enable": false 00:05:50.600 } 00:05:50.600 }, 00:05:50.600 { 00:05:50.600 "method": "bdev_wait_for_examine" 00:05:50.600 } 00:05:50.600 ] 00:05:50.600 }, 00:05:50.600 { 00:05:50.600 "subsystem": "scsi", 00:05:50.600 "config": null 00:05:50.600 }, 00:05:50.600 { 
00:05:50.600 "subsystem": "scheduler", 00:05:50.600 "config": [ 00:05:50.600 { 00:05:50.600 "method": "framework_set_scheduler", 00:05:50.600 "params": { 00:05:50.600 "name": "static" 00:05:50.600 } 00:05:50.600 } 00:05:50.600 ] 00:05:50.600 }, 00:05:50.600 { 00:05:50.600 "subsystem": "vhost_scsi", 00:05:50.600 "config": [] 00:05:50.600 }, 00:05:50.600 { 00:05:50.600 "subsystem": "vhost_blk", 00:05:50.600 "config": [] 00:05:50.600 }, 00:05:50.600 { 00:05:50.600 "subsystem": "ublk", 00:05:50.600 "config": [] 00:05:50.600 }, 00:05:50.600 { 00:05:50.600 "subsystem": "nbd", 00:05:50.600 "config": [] 00:05:50.600 }, 00:05:50.600 { 00:05:50.600 "subsystem": "nvmf", 00:05:50.600 "config": [ 00:05:50.600 { 00:05:50.600 "method": "nvmf_set_config", 00:05:50.600 "params": { 00:05:50.600 "discovery_filter": "match_any", 00:05:50.600 "admin_cmd_passthru": { 00:05:50.600 "identify_ctrlr": false 00:05:50.600 }, 00:05:50.600 "dhchap_digests": [ 00:05:50.600 "sha256", 00:05:50.600 "sha384", 00:05:50.600 "sha512" 00:05:50.600 ], 00:05:50.600 "dhchap_dhgroups": [ 00:05:50.600 "null", 00:05:50.600 "ffdhe2048", 00:05:50.600 "ffdhe3072", 00:05:50.600 "ffdhe4096", 00:05:50.600 "ffdhe6144", 00:05:50.600 "ffdhe8192" 00:05:50.600 ] 00:05:50.600 } 00:05:50.600 }, 00:05:50.600 { 00:05:50.600 "method": "nvmf_set_max_subsystems", 00:05:50.600 "params": { 00:05:50.600 "max_subsystems": 1024 00:05:50.600 } 00:05:50.600 }, 00:05:50.600 { 00:05:50.600 "method": "nvmf_set_crdt", 00:05:50.600 "params": { 00:05:50.600 "crdt1": 0, 00:05:50.600 "crdt2": 0, 00:05:50.600 "crdt3": 0 00:05:50.600 } 00:05:50.600 }, 00:05:50.600 { 00:05:50.600 "method": "nvmf_create_transport", 00:05:50.600 "params": { 00:05:50.600 "trtype": "TCP", 00:05:50.600 "max_queue_depth": 128, 00:05:50.600 "max_io_qpairs_per_ctrlr": 127, 00:05:50.600 "in_capsule_data_size": 4096, 00:05:50.600 "max_io_size": 131072, 00:05:50.600 "io_unit_size": 131072, 00:05:50.600 "max_aq_depth": 128, 00:05:50.600 "num_shared_buffers": 511, 
00:05:50.600 "buf_cache_size": 4294967295, 00:05:50.600 "dif_insert_or_strip": false, 00:05:50.600 "zcopy": false, 00:05:50.600 "c2h_success": true, 00:05:50.600 "sock_priority": 0, 00:05:50.600 "abort_timeout_sec": 1, 00:05:50.600 "ack_timeout": 0, 00:05:50.600 "data_wr_pool_size": 0 00:05:50.600 } 00:05:50.600 } 00:05:50.600 ] 00:05:50.600 }, 00:05:50.600 { 00:05:50.600 "subsystem": "iscsi", 00:05:50.600 "config": [ 00:05:50.600 { 00:05:50.600 "method": "iscsi_set_options", 00:05:50.600 "params": { 00:05:50.600 "node_base": "iqn.2016-06.io.spdk", 00:05:50.600 "max_sessions": 128, 00:05:50.600 "max_connections_per_session": 2, 00:05:50.600 "max_queue_depth": 64, 00:05:50.600 "default_time2wait": 2, 00:05:50.600 "default_time2retain": 20, 00:05:50.600 "first_burst_length": 8192, 00:05:50.600 "immediate_data": true, 00:05:50.600 "allow_duplicated_isid": false, 00:05:50.600 "error_recovery_level": 0, 00:05:50.600 "nop_timeout": 60, 00:05:50.600 "nop_in_interval": 30, 00:05:50.600 "disable_chap": false, 00:05:50.600 "require_chap": false, 00:05:50.600 "mutual_chap": false, 00:05:50.600 "chap_group": 0, 00:05:50.600 "max_large_datain_per_connection": 64, 00:05:50.600 "max_r2t_per_connection": 4, 00:05:50.600 "pdu_pool_size": 36864, 00:05:50.600 "immediate_data_pool_size": 16384, 00:05:50.600 "data_out_pool_size": 2048 00:05:50.600 } 00:05:50.600 } 00:05:50.600 ] 00:05:50.600 } 00:05:50.600 ] 00:05:50.600 } 00:05:50.600 11:15:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:50.600 11:15:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57259 00:05:50.600 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57259 ']' 00:05:50.600 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57259 00:05:50.601 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:50.601 11:15:33 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.601 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57259 00:05:50.601 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.601 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.601 killing process with pid 57259 00:05:50.601 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57259' 00:05:50.601 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57259 00:05:50.601 11:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57259 00:05:53.140 11:15:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57304 00:05:53.140 11:15:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:53.140 11:15:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:58.415 11:15:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57304 00:05:58.415 11:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57304 ']' 00:05:58.415 11:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57304 00:05:58.415 11:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:58.415 11:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.415 11:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57304 00:05:58.415 11:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.415 11:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:05:58.415 killing process with pid 57304 00:05:58.415 11:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57304' 00:05:58.415 11:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57304 00:05:58.415 11:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57304 00:06:00.954 11:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:00.954 11:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:00.954 00:06:00.954 real 0m11.305s 00:06:00.954 user 0m10.720s 00:06:00.954 sys 0m0.869s 00:06:00.954 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.954 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.954 ************************************ 00:06:00.954 END TEST skip_rpc_with_json 00:06:00.954 ************************************ 00:06:00.954 11:15:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:00.954 11:15:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.954 11:15:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.954 11:15:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.954 ************************************ 00:06:00.954 START TEST skip_rpc_with_delay 00:06:00.954 ************************************ 00:06:00.954 11:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:00.954 11:15:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.954 11:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:00.954 11:15:43 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.954 11:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.954 11:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.954 11:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.955 11:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.955 11:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.955 11:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.955 11:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.955 11:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:00.955 11:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.955 [2024-11-20 11:15:43.709161] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:00.955 11:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:00.955 11:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:00.955 11:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:00.955 11:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:00.955 00:06:00.955 real 0m0.173s 00:06:00.955 user 0m0.087s 00:06:00.955 sys 0m0.084s 00:06:00.955 11:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.955 11:15:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:00.955 ************************************ 00:06:00.955 END TEST skip_rpc_with_delay 00:06:00.955 ************************************ 00:06:00.955 11:15:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:00.955 11:15:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:00.955 11:15:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:00.955 11:15:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.955 11:15:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.955 11:15:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.955 ************************************ 00:06:00.955 START TEST exit_on_failed_rpc_init 00:06:00.955 ************************************ 00:06:00.955 11:15:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:00.955 11:15:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57443 00:06:00.955 11:15:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.955 11:15:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57443 00:06:00.955 11:15:43 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57443 ']' 00:06:00.955 11:15:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.955 11:15:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.955 11:15:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.955 11:15:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.955 11:15:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:00.955 [2024-11-20 11:15:43.946579] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:06:00.955 [2024-11-20 11:15:43.946715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57443 ] 00:06:01.215 [2024-11-20 11:15:44.126932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.215 [2024-11-20 11:15:44.280183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.597 11:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.597 11:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:02.597 11:15:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.597 11:15:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.597 11:15:45 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:02.597 11:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.597 11:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.597 11:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.597 11:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.597 11:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.597 11:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.597 11:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.597 11:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.597 11:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:02.597 11:15:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.597 [2024-11-20 11:15:45.478844] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:06:02.597 [2024-11-20 11:15:45.479001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57461 ] 00:06:02.597 [2024-11-20 11:15:45.663270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.857 [2024-11-20 11:15:45.808068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.857 [2024-11-20 11:15:45.808194] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:02.857 [2024-11-20 11:15:45.808210] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:02.857 [2024-11-20 11:15:45.808226] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:03.117 11:15:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:03.117 11:15:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:03.117 11:15:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:03.117 11:15:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:03.117 11:15:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:03.117 11:15:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:03.117 11:15:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:03.117 11:15:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57443 00:06:03.117 11:15:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57443 ']' 00:06:03.117 11:15:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57443 00:06:03.117 11:15:46 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:03.117 11:15:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.117 11:15:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57443 00:06:03.117 11:15:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.117 11:15:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.117 killing process with pid 57443 00:06:03.117 11:15:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57443' 00:06:03.117 11:15:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57443 00:06:03.117 11:15:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57443 00:06:05.657 00:06:05.657 real 0m4.659s 00:06:05.657 user 0m4.918s 00:06:05.657 sys 0m0.726s 00:06:05.657 11:15:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.657 11:15:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:05.657 ************************************ 00:06:05.657 END TEST exit_on_failed_rpc_init 00:06:05.657 ************************************ 00:06:05.657 11:15:48 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:05.657 00:06:05.657 real 0m24.192s 00:06:05.657 user 0m23.035s 00:06:05.657 sys 0m2.361s 00:06:05.657 11:15:48 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.657 11:15:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.657 ************************************ 00:06:05.657 END TEST skip_rpc 00:06:05.657 ************************************ 00:06:05.657 11:15:48 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:05.657 11:15:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.657 11:15:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.657 11:15:48 -- common/autotest_common.sh@10 -- # set +x 00:06:05.657 ************************************ 00:06:05.657 START TEST rpc_client 00:06:05.657 ************************************ 00:06:05.657 11:15:48 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:05.657 * Looking for test storage... 00:06:05.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:05.657 11:15:48 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:05.657 11:15:48 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:05.657 11:15:48 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:05.916 11:15:48 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:05.916 11:15:48 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.916 11:15:48 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.916 11:15:48 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.916 11:15:48 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.916 11:15:48 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.916 11:15:48 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.916 11:15:48 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.916 11:15:48 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.916 11:15:48 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.916 11:15:48 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.916 11:15:48 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.916 11:15:48 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:05.916 11:15:48 rpc_client -- scripts/common.sh@345 
-- # : 1 00:06:05.916 11:15:48 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.916 11:15:48 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.916 11:15:48 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:05.917 11:15:48 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:05.917 11:15:48 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.917 11:15:48 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:05.917 11:15:48 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.917 11:15:48 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:05.917 11:15:48 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:05.917 11:15:48 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.917 11:15:48 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:05.917 11:15:48 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.917 11:15:48 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.917 11:15:48 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.917 11:15:48 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:05.917 11:15:48 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.917 11:15:48 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:05.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.917 --rc genhtml_branch_coverage=1 00:06:05.917 --rc genhtml_function_coverage=1 00:06:05.917 --rc genhtml_legend=1 00:06:05.917 --rc geninfo_all_blocks=1 00:06:05.917 --rc geninfo_unexecuted_blocks=1 00:06:05.917 00:06:05.917 ' 00:06:05.917 11:15:48 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:05.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.917 --rc genhtml_branch_coverage=1 00:06:05.917 --rc genhtml_function_coverage=1 00:06:05.917 --rc 
genhtml_legend=1 00:06:05.917 --rc geninfo_all_blocks=1 00:06:05.917 --rc geninfo_unexecuted_blocks=1 00:06:05.917 00:06:05.917 ' 00:06:05.917 11:15:48 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:05.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.917 --rc genhtml_branch_coverage=1 00:06:05.917 --rc genhtml_function_coverage=1 00:06:05.917 --rc genhtml_legend=1 00:06:05.917 --rc geninfo_all_blocks=1 00:06:05.917 --rc geninfo_unexecuted_blocks=1 00:06:05.917 00:06:05.917 ' 00:06:05.917 11:15:48 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:05.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.917 --rc genhtml_branch_coverage=1 00:06:05.917 --rc genhtml_function_coverage=1 00:06:05.917 --rc genhtml_legend=1 00:06:05.917 --rc geninfo_all_blocks=1 00:06:05.917 --rc geninfo_unexecuted_blocks=1 00:06:05.917 00:06:05.917 ' 00:06:05.917 11:15:48 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:05.917 OK 00:06:05.917 11:15:48 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:05.917 00:06:05.917 real 0m0.310s 00:06:05.917 user 0m0.168s 00:06:05.917 sys 0m0.159s 00:06:05.917 11:15:48 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.917 11:15:48 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:05.917 ************************************ 00:06:05.917 END TEST rpc_client 00:06:05.917 ************************************ 00:06:05.917 11:15:49 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:05.917 11:15:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.917 11:15:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.917 11:15:49 -- common/autotest_common.sh@10 -- # set +x 00:06:05.917 ************************************ 00:06:05.917 START TEST json_config 
00:06:05.917 ************************************ 00:06:05.917 11:15:49 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:06.182 11:15:49 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.182 11:15:49 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.182 11:15:49 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.182 11:15:49 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.182 11:15:49 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.182 11:15:49 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.182 11:15:49 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.182 11:15:49 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.182 11:15:49 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.182 11:15:49 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.182 11:15:49 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.182 11:15:49 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.182 11:15:49 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.182 11:15:49 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.182 11:15:49 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.182 11:15:49 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:06.182 11:15:49 json_config -- scripts/common.sh@345 -- # : 1 00:06:06.182 11:15:49 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.182 11:15:49 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.182 11:15:49 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:06.182 11:15:49 json_config -- scripts/common.sh@353 -- # local d=1 00:06:06.182 11:15:49 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.182 11:15:49 json_config -- scripts/common.sh@355 -- # echo 1 00:06:06.182 11:15:49 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.182 11:15:49 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:06.182 11:15:49 json_config -- scripts/common.sh@353 -- # local d=2 00:06:06.182 11:15:49 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.182 11:15:49 json_config -- scripts/common.sh@355 -- # echo 2 00:06:06.182 11:15:49 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.182 11:15:49 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.182 11:15:49 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.182 11:15:49 json_config -- scripts/common.sh@368 -- # return 0 00:06:06.182 11:15:49 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.182 11:15:49 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.183 --rc genhtml_branch_coverage=1 00:06:06.183 --rc genhtml_function_coverage=1 00:06:06.183 --rc genhtml_legend=1 00:06:06.183 --rc geninfo_all_blocks=1 00:06:06.183 --rc geninfo_unexecuted_blocks=1 00:06:06.183 00:06:06.183 ' 00:06:06.183 11:15:49 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.183 --rc genhtml_branch_coverage=1 00:06:06.183 --rc genhtml_function_coverage=1 00:06:06.183 --rc genhtml_legend=1 00:06:06.183 --rc geninfo_all_blocks=1 00:06:06.183 --rc geninfo_unexecuted_blocks=1 00:06:06.183 00:06:06.183 ' 00:06:06.183 11:15:49 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.183 --rc genhtml_branch_coverage=1 00:06:06.183 --rc genhtml_function_coverage=1 00:06:06.183 --rc genhtml_legend=1 00:06:06.183 --rc geninfo_all_blocks=1 00:06:06.183 --rc geninfo_unexecuted_blocks=1 00:06:06.183 00:06:06.183 ' 00:06:06.183 11:15:49 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.183 --rc genhtml_branch_coverage=1 00:06:06.183 --rc genhtml_function_coverage=1 00:06:06.183 --rc genhtml_legend=1 00:06:06.183 --rc geninfo_all_blocks=1 00:06:06.183 --rc geninfo_unexecuted_blocks=1 00:06:06.183 00:06:06.183 ' 00:06:06.183 11:15:49 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1b4f2fe2-87f8-4e8d-9e38-efdfeac62c69 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=1b4f2fe2-87f8-4e8d-9e38-efdfeac62c69 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:06.183 11:15:49 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:06.183 11:15:49 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.183 11:15:49 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.183 11:15:49 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.183 11:15:49 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.183 11:15:49 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.183 11:15:49 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.183 11:15:49 json_config -- paths/export.sh@5 -- # export PATH 00:06:06.183 11:15:49 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@51 -- # : 0 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:06.183 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:06.183 11:15:49 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:06.183 11:15:49 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:06:06.183 11:15:49 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:06.183 11:15:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:06.184 11:15:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:06.184 11:15:49 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:06.184 WARNING: No tests are enabled so not running JSON configuration tests 00:06:06.184 11:15:49 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:06.184 11:15:49 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:06.184 00:06:06.184 real 0m0.227s 00:06:06.184 user 0m0.135s 00:06:06.184 sys 0m0.100s 00:06:06.184 11:15:49 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.184 11:15:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.184 ************************************ 00:06:06.184 END TEST json_config 00:06:06.184 ************************************ 00:06:06.184 11:15:49 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:06.184 11:15:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.453 11:15:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.453 11:15:49 -- common/autotest_common.sh@10 -- # set +x 00:06:06.453 ************************************ 00:06:06.453 START TEST json_config_extra_key 00:06:06.453 ************************************ 00:06:06.453 11:15:49 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:06.453 11:15:49 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.453 11:15:49 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:06:06.453 11:15:49 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.453 11:15:49 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.453 11:15:49 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:06.453 11:15:49 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.453 11:15:49 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.453 --rc genhtml_branch_coverage=1 00:06:06.453 --rc genhtml_function_coverage=1 00:06:06.453 --rc genhtml_legend=1 00:06:06.453 --rc geninfo_all_blocks=1 00:06:06.453 --rc geninfo_unexecuted_blocks=1 00:06:06.454 00:06:06.454 ' 00:06:06.454 11:15:49 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.454 --rc genhtml_branch_coverage=1 00:06:06.454 --rc genhtml_function_coverage=1 00:06:06.454 --rc 
genhtml_legend=1 00:06:06.454 --rc geninfo_all_blocks=1 00:06:06.454 --rc geninfo_unexecuted_blocks=1 00:06:06.454 00:06:06.454 ' 00:06:06.454 11:15:49 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.454 --rc genhtml_branch_coverage=1 00:06:06.454 --rc genhtml_function_coverage=1 00:06:06.454 --rc genhtml_legend=1 00:06:06.454 --rc geninfo_all_blocks=1 00:06:06.454 --rc geninfo_unexecuted_blocks=1 00:06:06.454 00:06:06.454 ' 00:06:06.454 11:15:49 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.454 --rc genhtml_branch_coverage=1 00:06:06.454 --rc genhtml_function_coverage=1 00:06:06.454 --rc genhtml_legend=1 00:06:06.454 --rc geninfo_all_blocks=1 00:06:06.454 --rc geninfo_unexecuted_blocks=1 00:06:06.454 00:06:06.454 ' 00:06:06.454 11:15:49 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1b4f2fe2-87f8-4e8d-9e38-efdfeac62c69 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=1b4f2fe2-87f8-4e8d-9e38-efdfeac62c69 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:06.454 11:15:49 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:06.454 11:15:49 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.454 11:15:49 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.454 11:15:49 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.454 11:15:49 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.454 11:15:49 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.454 11:15:49 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.454 11:15:49 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:06.454 11:15:49 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:06.454 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:06.454 11:15:49 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:06.454 11:15:49 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:06.454 11:15:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:06.454 11:15:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:06.454 11:15:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:06.454 11:15:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:06.454 11:15:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:06.454 11:15:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:06.454 11:15:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:06.454 11:15:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:06.454 11:15:49 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:06.454 INFO: launching applications... 00:06:06.454 11:15:49 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:06:06.454 11:15:49 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:06.454 11:15:49 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:06.454 11:15:49 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:06.454 11:15:49 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:06.454 11:15:49 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:06.454 11:15:49 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:06.454 11:15:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.454 11:15:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.454 11:15:49 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57671 00:06:06.454 Waiting for target to run... 00:06:06.454 11:15:49 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:06.454 11:15:49 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57671 /var/tmp/spdk_tgt.sock 00:06:06.454 11:15:49 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57671 ']' 00:06:06.454 11:15:49 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:06.454 11:15:49 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:06.454 11:15:49 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:06.454 11:15:49 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:06.454 11:15:49 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.454 11:15:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:06.714 [2024-11-20 11:15:49.649461] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:06:06.714 [2024-11-20 11:15:49.649623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57671 ] 00:06:07.282 [2024-11-20 11:15:50.227901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.282 [2024-11-20 11:15:50.334224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.218 11:15:51 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.218 11:15:51 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:08.218 00:06:08.218 11:15:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:08.218 INFO: shutting down applications... 00:06:08.218 11:15:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:08.218 11:15:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:08.218 11:15:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:08.218 11:15:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:08.218 11:15:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57671 ]] 00:06:08.218 11:15:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57671 00:06:08.218 11:15:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:08.218 11:15:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.218 11:15:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57671 00:06:08.218 11:15:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:08.786 11:15:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:08.787 11:15:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.787 11:15:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57671 00:06:08.787 11:15:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:09.046 11:15:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:09.046 11:15:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.046 11:15:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57671 00:06:09.046 11:15:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:09.615 11:15:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:09.615 11:15:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.615 11:15:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57671 00:06:09.615 11:15:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.185 11:15:53 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:06:10.185 11:15:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.185 11:15:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57671 00:06:10.185 11:15:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.754 11:15:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:10.754 11:15:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.754 11:15:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57671 00:06:10.754 11:15:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:11.325 11:15:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.325 11:15:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.325 11:15:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57671 00:06:11.325 11:15:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:11.325 11:15:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:11.325 11:15:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:11.325 SPDK target shutdown done 00:06:11.325 11:15:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:11.325 Success 00:06:11.325 11:15:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:11.325 00:06:11.325 real 0m4.828s 00:06:11.325 user 0m4.229s 00:06:11.325 sys 0m0.795s 00:06:11.325 11:15:54 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.325 11:15:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:11.325 ************************************ 00:06:11.325 END TEST json_config_extra_key 00:06:11.325 ************************************ 00:06:11.325 11:15:54 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:11.325 11:15:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.325 11:15:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.325 11:15:54 -- common/autotest_common.sh@10 -- # set +x 00:06:11.325 ************************************ 00:06:11.325 START TEST alias_rpc 00:06:11.325 ************************************ 00:06:11.325 11:15:54 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:11.325 * Looking for test storage... 00:06:11.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:11.325 11:15:54 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:11.325 11:15:54 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:11.325 11:15:54 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:11.325 11:15:54 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:11.325 11:15:54 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.325 11:15:54 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.325 11:15:54 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:11.326 11:15:54 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.326 11:15:54 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:11.326 11:15:54 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.326 11:15:54 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:11.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.326 --rc genhtml_branch_coverage=1 00:06:11.326 --rc genhtml_function_coverage=1 00:06:11.326 --rc genhtml_legend=1 00:06:11.326 --rc geninfo_all_blocks=1 00:06:11.326 --rc geninfo_unexecuted_blocks=1 00:06:11.326 00:06:11.326 ' 00:06:11.326 11:15:54 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:11.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.326 --rc genhtml_branch_coverage=1 00:06:11.326 --rc genhtml_function_coverage=1 00:06:11.326 --rc 
genhtml_legend=1 00:06:11.326 --rc geninfo_all_blocks=1 00:06:11.326 --rc geninfo_unexecuted_blocks=1 00:06:11.326 00:06:11.326 ' 00:06:11.326 11:15:54 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:11.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.326 --rc genhtml_branch_coverage=1 00:06:11.326 --rc genhtml_function_coverage=1 00:06:11.326 --rc genhtml_legend=1 00:06:11.326 --rc geninfo_all_blocks=1 00:06:11.326 --rc geninfo_unexecuted_blocks=1 00:06:11.326 00:06:11.326 ' 00:06:11.326 11:15:54 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:11.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.326 --rc genhtml_branch_coverage=1 00:06:11.326 --rc genhtml_function_coverage=1 00:06:11.326 --rc genhtml_legend=1 00:06:11.326 --rc geninfo_all_blocks=1 00:06:11.326 --rc geninfo_unexecuted_blocks=1 00:06:11.326 00:06:11.326 ' 00:06:11.326 11:15:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:11.326 11:15:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57788 00:06:11.326 11:15:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:11.326 11:15:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57788 00:06:11.326 11:15:54 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57788 ']' 00:06:11.326 11:15:54 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.326 11:15:54 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.326 11:15:54 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:11.326 11:15:54 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.326 11:15:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.586 [2024-11-20 11:15:54.495573] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:06:11.586 [2024-11-20 11:15:54.495724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57788 ] 00:06:11.586 [2024-11-20 11:15:54.651064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.846 [2024-11-20 11:15:54.770770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.785 11:15:55 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.785 11:15:55 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:12.785 11:15:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:13.044 11:15:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57788 00:06:13.044 11:15:55 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57788 ']' 00:06:13.044 11:15:55 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57788 00:06:13.044 11:15:55 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:13.044 11:15:55 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.044 11:15:55 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57788 00:06:13.044 11:15:55 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.044 11:15:55 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.044 killing process with pid 57788 00:06:13.044 11:15:55 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57788' 00:06:13.044 11:15:55 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57788 00:06:13.044 11:15:55 alias_rpc -- common/autotest_common.sh@978 -- # wait 57788 00:06:15.582 00:06:15.582 real 0m4.171s 00:06:15.582 user 0m4.198s 00:06:15.582 sys 0m0.566s 00:06:15.582 11:15:58 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.582 11:15:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.582 ************************************ 00:06:15.582 END TEST alias_rpc 00:06:15.582 ************************************ 00:06:15.582 11:15:58 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:15.582 11:15:58 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:15.582 11:15:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.582 11:15:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.582 11:15:58 -- common/autotest_common.sh@10 -- # set +x 00:06:15.582 ************************************ 00:06:15.582 START TEST spdkcli_tcp 00:06:15.582 ************************************ 00:06:15.582 11:15:58 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:15.582 * Looking for test storage... 
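The teardown traced above follows autotest_common.sh's killprocess flow: verify the pid is still signalable, read its command name with ps, refuse to signal a sudo wrapper, then SIGTERM and reap. A hedged reconstruction of that flow (behavior inferred from the xtrace lines, not copied from the actual script):

```shell
# Sketch of the killprocess pattern seen in the trace: kill -0 checks
# that the pid exists and is signalable, the ps/comm guard avoids
# terminating a privileged "sudo" wrapper, and wait reaps the child so
# no zombie is left behind.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # never signal the sudo wrapper itself
        [ "$process_name" != sudo ] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    # wait works here because the target was started by this shell
    wait "$pid" 2>/dev/null || true
}
```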
00:06:15.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:15.582 11:15:58 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:15.582 11:15:58 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:15.582 11:15:58 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:15.582 11:15:58 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.582 11:15:58 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:15.582 11:15:58 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.582 11:15:58 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.582 --rc genhtml_branch_coverage=1 00:06:15.582 --rc genhtml_function_coverage=1 00:06:15.582 --rc genhtml_legend=1 00:06:15.582 --rc geninfo_all_blocks=1 00:06:15.582 --rc geninfo_unexecuted_blocks=1 00:06:15.582 00:06:15.582 ' 00:06:15.582 11:15:58 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.582 --rc genhtml_branch_coverage=1 00:06:15.582 --rc genhtml_function_coverage=1 00:06:15.582 --rc genhtml_legend=1 00:06:15.582 --rc geninfo_all_blocks=1 00:06:15.582 --rc geninfo_unexecuted_blocks=1 00:06:15.582 00:06:15.582 ' 00:06:15.582 11:15:58 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:15.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.582 --rc genhtml_branch_coverage=1 00:06:15.582 --rc genhtml_function_coverage=1 00:06:15.582 --rc genhtml_legend=1 00:06:15.582 --rc geninfo_all_blocks=1 00:06:15.582 --rc geninfo_unexecuted_blocks=1 00:06:15.582 00:06:15.582 ' 00:06:15.582 11:15:58 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.582 --rc genhtml_branch_coverage=1 00:06:15.582 --rc genhtml_function_coverage=1 00:06:15.582 --rc genhtml_legend=1 00:06:15.582 --rc geninfo_all_blocks=1 00:06:15.582 --rc geninfo_unexecuted_blocks=1 00:06:15.582 00:06:15.582 ' 00:06:15.582 11:15:58 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:15.582 11:15:58 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:15.582 11:15:58 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:15.582 11:15:58 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:15.582 11:15:58 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:15.582 11:15:58 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:15.582 11:15:58 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:15.582 11:15:58 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:15.582 11:15:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.582 11:15:58 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57895 00:06:15.582 11:15:58 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:15.582 11:15:58 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57895 00:06:15.582 11:15:58 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57895 ']' 00:06:15.582 11:15:58 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.582 11:15:58 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.582 11:15:58 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.582 11:15:58 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.582 11:15:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.842 [2024-11-20 11:15:58.750172] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:06:15.842 [2024-11-20 11:15:58.750292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57895 ] 00:06:15.842 [2024-11-20 11:15:58.927286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.102 [2024-11-20 11:15:59.049560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.102 [2024-11-20 11:15:59.049597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.041 11:15:59 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.041 11:15:59 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:17.041 11:15:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57912 00:06:17.042 11:15:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:17.042 11:15:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:17.042 [ 00:06:17.042 "bdev_malloc_delete", 
00:06:17.042 "bdev_malloc_create", 00:06:17.042 "bdev_null_resize", 00:06:17.042 "bdev_null_delete", 00:06:17.042 "bdev_null_create", 00:06:17.042 "bdev_nvme_cuse_unregister", 00:06:17.042 "bdev_nvme_cuse_register", 00:06:17.042 "bdev_opal_new_user", 00:06:17.042 "bdev_opal_set_lock_state", 00:06:17.042 "bdev_opal_delete", 00:06:17.042 "bdev_opal_get_info", 00:06:17.042 "bdev_opal_create", 00:06:17.042 "bdev_nvme_opal_revert", 00:06:17.042 "bdev_nvme_opal_init", 00:06:17.042 "bdev_nvme_send_cmd", 00:06:17.042 "bdev_nvme_set_keys", 00:06:17.042 "bdev_nvme_get_path_iostat", 00:06:17.042 "bdev_nvme_get_mdns_discovery_info", 00:06:17.042 "bdev_nvme_stop_mdns_discovery", 00:06:17.042 "bdev_nvme_start_mdns_discovery", 00:06:17.042 "bdev_nvme_set_multipath_policy", 00:06:17.042 "bdev_nvme_set_preferred_path", 00:06:17.042 "bdev_nvme_get_io_paths", 00:06:17.042 "bdev_nvme_remove_error_injection", 00:06:17.042 "bdev_nvme_add_error_injection", 00:06:17.042 "bdev_nvme_get_discovery_info", 00:06:17.042 "bdev_nvme_stop_discovery", 00:06:17.042 "bdev_nvme_start_discovery", 00:06:17.042 "bdev_nvme_get_controller_health_info", 00:06:17.042 "bdev_nvme_disable_controller", 00:06:17.042 "bdev_nvme_enable_controller", 00:06:17.042 "bdev_nvme_reset_controller", 00:06:17.042 "bdev_nvme_get_transport_statistics", 00:06:17.042 "bdev_nvme_apply_firmware", 00:06:17.042 "bdev_nvme_detach_controller", 00:06:17.042 "bdev_nvme_get_controllers", 00:06:17.042 "bdev_nvme_attach_controller", 00:06:17.042 "bdev_nvme_set_hotplug", 00:06:17.042 "bdev_nvme_set_options", 00:06:17.042 "bdev_passthru_delete", 00:06:17.042 "bdev_passthru_create", 00:06:17.042 "bdev_lvol_set_parent_bdev", 00:06:17.042 "bdev_lvol_set_parent", 00:06:17.042 "bdev_lvol_check_shallow_copy", 00:06:17.042 "bdev_lvol_start_shallow_copy", 00:06:17.042 "bdev_lvol_grow_lvstore", 00:06:17.042 "bdev_lvol_get_lvols", 00:06:17.042 "bdev_lvol_get_lvstores", 00:06:17.042 "bdev_lvol_delete", 00:06:17.042 "bdev_lvol_set_read_only", 
00:06:17.042 "bdev_lvol_resize", 00:06:17.042 "bdev_lvol_decouple_parent", 00:06:17.042 "bdev_lvol_inflate", 00:06:17.042 "bdev_lvol_rename", 00:06:17.042 "bdev_lvol_clone_bdev", 00:06:17.042 "bdev_lvol_clone", 00:06:17.042 "bdev_lvol_snapshot", 00:06:17.042 "bdev_lvol_create", 00:06:17.042 "bdev_lvol_delete_lvstore", 00:06:17.042 "bdev_lvol_rename_lvstore", 00:06:17.042 "bdev_lvol_create_lvstore", 00:06:17.042 "bdev_raid_set_options", 00:06:17.042 "bdev_raid_remove_base_bdev", 00:06:17.042 "bdev_raid_add_base_bdev", 00:06:17.042 "bdev_raid_delete", 00:06:17.042 "bdev_raid_create", 00:06:17.042 "bdev_raid_get_bdevs", 00:06:17.042 "bdev_error_inject_error", 00:06:17.042 "bdev_error_delete", 00:06:17.042 "bdev_error_create", 00:06:17.042 "bdev_split_delete", 00:06:17.042 "bdev_split_create", 00:06:17.042 "bdev_delay_delete", 00:06:17.042 "bdev_delay_create", 00:06:17.042 "bdev_delay_update_latency", 00:06:17.042 "bdev_zone_block_delete", 00:06:17.042 "bdev_zone_block_create", 00:06:17.042 "blobfs_create", 00:06:17.042 "blobfs_detect", 00:06:17.042 "blobfs_set_cache_size", 00:06:17.042 "bdev_aio_delete", 00:06:17.042 "bdev_aio_rescan", 00:06:17.042 "bdev_aio_create", 00:06:17.042 "bdev_ftl_set_property", 00:06:17.042 "bdev_ftl_get_properties", 00:06:17.042 "bdev_ftl_get_stats", 00:06:17.042 "bdev_ftl_unmap", 00:06:17.042 "bdev_ftl_unload", 00:06:17.042 "bdev_ftl_delete", 00:06:17.042 "bdev_ftl_load", 00:06:17.042 "bdev_ftl_create", 00:06:17.042 "bdev_virtio_attach_controller", 00:06:17.042 "bdev_virtio_scsi_get_devices", 00:06:17.042 "bdev_virtio_detach_controller", 00:06:17.042 "bdev_virtio_blk_set_hotplug", 00:06:17.042 "bdev_iscsi_delete", 00:06:17.042 "bdev_iscsi_create", 00:06:17.042 "bdev_iscsi_set_options", 00:06:17.042 "accel_error_inject_error", 00:06:17.042 "ioat_scan_accel_module", 00:06:17.042 "dsa_scan_accel_module", 00:06:17.042 "iaa_scan_accel_module", 00:06:17.042 "keyring_file_remove_key", 00:06:17.042 "keyring_file_add_key", 00:06:17.042 
"keyring_linux_set_options", 00:06:17.042 "fsdev_aio_delete", 00:06:17.042 "fsdev_aio_create", 00:06:17.042 "iscsi_get_histogram", 00:06:17.042 "iscsi_enable_histogram", 00:06:17.042 "iscsi_set_options", 00:06:17.042 "iscsi_get_auth_groups", 00:06:17.042 "iscsi_auth_group_remove_secret", 00:06:17.042 "iscsi_auth_group_add_secret", 00:06:17.042 "iscsi_delete_auth_group", 00:06:17.042 "iscsi_create_auth_group", 00:06:17.042 "iscsi_set_discovery_auth", 00:06:17.042 "iscsi_get_options", 00:06:17.042 "iscsi_target_node_request_logout", 00:06:17.042 "iscsi_target_node_set_redirect", 00:06:17.042 "iscsi_target_node_set_auth", 00:06:17.042 "iscsi_target_node_add_lun", 00:06:17.042 "iscsi_get_stats", 00:06:17.042 "iscsi_get_connections", 00:06:17.042 "iscsi_portal_group_set_auth", 00:06:17.042 "iscsi_start_portal_group", 00:06:17.042 "iscsi_delete_portal_group", 00:06:17.042 "iscsi_create_portal_group", 00:06:17.042 "iscsi_get_portal_groups", 00:06:17.042 "iscsi_delete_target_node", 00:06:17.042 "iscsi_target_node_remove_pg_ig_maps", 00:06:17.042 "iscsi_target_node_add_pg_ig_maps", 00:06:17.042 "iscsi_create_target_node", 00:06:17.042 "iscsi_get_target_nodes", 00:06:17.042 "iscsi_delete_initiator_group", 00:06:17.042 "iscsi_initiator_group_remove_initiators", 00:06:17.042 "iscsi_initiator_group_add_initiators", 00:06:17.042 "iscsi_create_initiator_group", 00:06:17.042 "iscsi_get_initiator_groups", 00:06:17.042 "nvmf_set_crdt", 00:06:17.042 "nvmf_set_config", 00:06:17.042 "nvmf_set_max_subsystems", 00:06:17.042 "nvmf_stop_mdns_prr", 00:06:17.042 "nvmf_publish_mdns_prr", 00:06:17.042 "nvmf_subsystem_get_listeners", 00:06:17.042 "nvmf_subsystem_get_qpairs", 00:06:17.042 "nvmf_subsystem_get_controllers", 00:06:17.042 "nvmf_get_stats", 00:06:17.042 "nvmf_get_transports", 00:06:17.042 "nvmf_create_transport", 00:06:17.042 "nvmf_get_targets", 00:06:17.042 "nvmf_delete_target", 00:06:17.042 "nvmf_create_target", 00:06:17.042 "nvmf_subsystem_allow_any_host", 00:06:17.042 
"nvmf_subsystem_set_keys", 00:06:17.042 "nvmf_subsystem_remove_host", 00:06:17.042 "nvmf_subsystem_add_host", 00:06:17.042 "nvmf_ns_remove_host", 00:06:17.042 "nvmf_ns_add_host", 00:06:17.042 "nvmf_subsystem_remove_ns", 00:06:17.042 "nvmf_subsystem_set_ns_ana_group", 00:06:17.042 "nvmf_subsystem_add_ns", 00:06:17.042 "nvmf_subsystem_listener_set_ana_state", 00:06:17.042 "nvmf_discovery_get_referrals", 00:06:17.042 "nvmf_discovery_remove_referral", 00:06:17.042 "nvmf_discovery_add_referral", 00:06:17.042 "nvmf_subsystem_remove_listener", 00:06:17.042 "nvmf_subsystem_add_listener", 00:06:17.042 "nvmf_delete_subsystem", 00:06:17.042 "nvmf_create_subsystem", 00:06:17.042 "nvmf_get_subsystems", 00:06:17.042 "env_dpdk_get_mem_stats", 00:06:17.042 "nbd_get_disks", 00:06:17.042 "nbd_stop_disk", 00:06:17.042 "nbd_start_disk", 00:06:17.042 "ublk_recover_disk", 00:06:17.042 "ublk_get_disks", 00:06:17.042 "ublk_stop_disk", 00:06:17.042 "ublk_start_disk", 00:06:17.042 "ublk_destroy_target", 00:06:17.042 "ublk_create_target", 00:06:17.042 "virtio_blk_create_transport", 00:06:17.042 "virtio_blk_get_transports", 00:06:17.042 "vhost_controller_set_coalescing", 00:06:17.042 "vhost_get_controllers", 00:06:17.042 "vhost_delete_controller", 00:06:17.043 "vhost_create_blk_controller", 00:06:17.043 "vhost_scsi_controller_remove_target", 00:06:17.043 "vhost_scsi_controller_add_target", 00:06:17.043 "vhost_start_scsi_controller", 00:06:17.043 "vhost_create_scsi_controller", 00:06:17.043 "thread_set_cpumask", 00:06:17.043 "scheduler_set_options", 00:06:17.043 "framework_get_governor", 00:06:17.043 "framework_get_scheduler", 00:06:17.043 "framework_set_scheduler", 00:06:17.043 "framework_get_reactors", 00:06:17.043 "thread_get_io_channels", 00:06:17.043 "thread_get_pollers", 00:06:17.043 "thread_get_stats", 00:06:17.043 "framework_monitor_context_switch", 00:06:17.043 "spdk_kill_instance", 00:06:17.043 "log_enable_timestamps", 00:06:17.043 "log_get_flags", 00:06:17.043 "log_clear_flag", 
00:06:17.043 "log_set_flag", 00:06:17.043 "log_get_level", 00:06:17.043 "log_set_level", 00:06:17.043 "log_get_print_level", 00:06:17.043 "log_set_print_level", 00:06:17.043 "framework_enable_cpumask_locks", 00:06:17.043 "framework_disable_cpumask_locks", 00:06:17.043 "framework_wait_init", 00:06:17.043 "framework_start_init", 00:06:17.043 "scsi_get_devices", 00:06:17.043 "bdev_get_histogram", 00:06:17.043 "bdev_enable_histogram", 00:06:17.043 "bdev_set_qos_limit", 00:06:17.043 "bdev_set_qd_sampling_period", 00:06:17.043 "bdev_get_bdevs", 00:06:17.043 "bdev_reset_iostat", 00:06:17.043 "bdev_get_iostat", 00:06:17.043 "bdev_examine", 00:06:17.043 "bdev_wait_for_examine", 00:06:17.043 "bdev_set_options", 00:06:17.043 "accel_get_stats", 00:06:17.043 "accel_set_options", 00:06:17.043 "accel_set_driver", 00:06:17.043 "accel_crypto_key_destroy", 00:06:17.043 "accel_crypto_keys_get", 00:06:17.043 "accel_crypto_key_create", 00:06:17.043 "accel_assign_opc", 00:06:17.043 "accel_get_module_info", 00:06:17.043 "accel_get_opc_assignments", 00:06:17.043 "vmd_rescan", 00:06:17.043 "vmd_remove_device", 00:06:17.043 "vmd_enable", 00:06:17.043 "sock_get_default_impl", 00:06:17.043 "sock_set_default_impl", 00:06:17.043 "sock_impl_set_options", 00:06:17.043 "sock_impl_get_options", 00:06:17.043 "iobuf_get_stats", 00:06:17.043 "iobuf_set_options", 00:06:17.043 "keyring_get_keys", 00:06:17.043 "framework_get_pci_devices", 00:06:17.043 "framework_get_config", 00:06:17.043 "framework_get_subsystems", 00:06:17.043 "fsdev_set_opts", 00:06:17.043 "fsdev_get_opts", 00:06:17.043 "trace_get_info", 00:06:17.043 "trace_get_tpoint_group_mask", 00:06:17.043 "trace_disable_tpoint_group", 00:06:17.043 "trace_enable_tpoint_group", 00:06:17.043 "trace_clear_tpoint_mask", 00:06:17.043 "trace_set_tpoint_mask", 00:06:17.043 "notify_get_notifications", 00:06:17.043 "notify_get_types", 00:06:17.043 "spdk_get_version", 00:06:17.043 "rpc_get_methods" 00:06:17.043 ] 00:06:17.303 11:16:00 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:17.303 11:16:00 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:17.303 11:16:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.303 11:16:00 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:17.303 11:16:00 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57895 00:06:17.303 11:16:00 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57895 ']' 00:06:17.303 11:16:00 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57895 00:06:17.303 11:16:00 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:17.303 11:16:00 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.303 11:16:00 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57895 00:06:17.303 11:16:00 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.303 11:16:00 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.303 11:16:00 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57895' 00:06:17.303 killing process with pid 57895 00:06:17.303 11:16:00 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57895 00:06:17.303 11:16:00 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57895 00:06:19.845 00:06:19.845 real 0m4.240s 00:06:19.845 user 0m7.551s 00:06:19.845 sys 0m0.613s 00:06:19.845 11:16:02 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.845 11:16:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:19.845 ************************************ 00:06:19.845 END TEST spdkcli_tcp 00:06:19.845 ************************************ 00:06:19.845 11:16:02 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:19.845 11:16:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.845 11:16:02 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.845 11:16:02 -- common/autotest_common.sh@10 -- # set +x 00:06:19.845 ************************************ 00:06:19.845 START TEST dpdk_mem_utility 00:06:19.845 ************************************ 00:06:19.845 11:16:02 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:19.845 * Looking for test storage... 00:06:19.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:19.845 11:16:02 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:19.845 11:16:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:19.845 11:16:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:19.845 11:16:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:19.845 11:16:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.845 11:16:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.845 11:16:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.845 11:16:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.845 11:16:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.845 11:16:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.845 11:16:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.845 11:16:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.845 11:16:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.845 11:16:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.845 11:16:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.845 11:16:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:19.845 11:16:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:19.845 
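The scripts/common.sh lines traced here ("lt 1.15 2" expanding into cmp_versions) implement a component-wise version comparison: each version is split on ".", "-" and ":" into an array, then compared field by field, with missing fields treated as 0. A rough reconstruction under those assumptions (the real cmp_versions in scripts/common.sh may differ in detail):

```shell
# Sketch of the cmp_versions logic from the trace: split both versions
# into numeric components, walk up to the longer array, and decide on
# the first unequal component; equal versions satisfy only ops
# containing "=". Assumes purely numeric components.
cmp_versions() {
    local ver1 ver2 op=$2 v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        # missing components compare as 0, so 1.15 == 1.15.0
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a > b )); then
            [ "$op" = '>' ]; return
        elif (( a < b )); then
            [ "$op" = '<' ]; return
        fi
    done
    [[ "$op" == *=* ]]
}
lt() { cmp_versions "$1" '<' "$2"; }
```

This is why the lcov version gate in the trace works: "lt 1.15 2" is true because 1 < 2 in the first component, so the coverage options get enabled.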
11:16:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.845 11:16:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.845 11:16:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:19.845 11:16:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:19.845 11:16:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.845 11:16:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:19.845 11:16:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.845 11:16:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:19.845 11:16:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:19.846 11:16:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.846 11:16:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:19.846 11:16:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.846 11:16:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.846 11:16:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.846 11:16:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:19.846 11:16:02 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.846 11:16:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:19.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.846 --rc genhtml_branch_coverage=1 00:06:19.846 --rc genhtml_function_coverage=1 00:06:19.846 --rc genhtml_legend=1 00:06:19.846 --rc geninfo_all_blocks=1 00:06:19.846 --rc geninfo_unexecuted_blocks=1 00:06:19.846 00:06:19.846 ' 00:06:19.846 11:16:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:19.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.846 --rc 
genhtml_branch_coverage=1 00:06:19.846 --rc genhtml_function_coverage=1 00:06:19.846 --rc genhtml_legend=1 00:06:19.846 --rc geninfo_all_blocks=1 00:06:19.846 --rc geninfo_unexecuted_blocks=1 00:06:19.846 00:06:19.846 ' 00:06:19.846 11:16:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:19.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.846 --rc genhtml_branch_coverage=1 00:06:19.846 --rc genhtml_function_coverage=1 00:06:19.846 --rc genhtml_legend=1 00:06:19.846 --rc geninfo_all_blocks=1 00:06:19.846 --rc geninfo_unexecuted_blocks=1 00:06:19.846 00:06:19.846 ' 00:06:19.846 11:16:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:19.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.846 --rc genhtml_branch_coverage=1 00:06:19.846 --rc genhtml_function_coverage=1 00:06:19.846 --rc genhtml_legend=1 00:06:19.846 --rc geninfo_all_blocks=1 00:06:19.846 --rc geninfo_unexecuted_blocks=1 00:06:19.846 00:06:19.846 ' 00:06:19.846 11:16:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:19.846 11:16:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58017 00:06:19.846 11:16:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.846 11:16:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58017 00:06:19.846 11:16:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58017 ']' 00:06:19.846 11:16:02 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.846 11:16:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.846 11:16:02 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:19.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.846 11:16:02 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.846 11:16:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:20.106 [2024-11-20 11:16:03.062390] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:06:20.106 [2024-11-20 11:16:03.062666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58017 ] 00:06:20.366 [2024-11-20 11:16:03.242021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.366 [2024-11-20 11:16:03.357915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.308 11:16:04 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.308 11:16:04 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:21.308 11:16:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:21.308 11:16:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:21.308 11:16:04 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.308 11:16:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:21.308 { 00:06:21.308 "filename": "/tmp/spdk_mem_dump.txt" 00:06:21.308 } 00:06:21.308 11:16:04 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.308 11:16:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:21.308 DPDK memory size 816.000000 MiB in 1 heap(s) 00:06:21.308 1 heaps totaling size 816.000000 MiB 00:06:21.308 size: 
816.000000 MiB heap id: 0 00:06:21.308 end heaps---------- 00:06:21.308 9 mempools totaling size 595.772034 MiB 00:06:21.308 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:21.308 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:21.308 size: 92.545471 MiB name: bdev_io_58017 00:06:21.308 size: 50.003479 MiB name: msgpool_58017 00:06:21.308 size: 36.509338 MiB name: fsdev_io_58017 00:06:21.308 size: 21.763794 MiB name: PDU_Pool 00:06:21.308 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:21.308 size: 4.133484 MiB name: evtpool_58017 00:06:21.308 size: 0.026123 MiB name: Session_Pool 00:06:21.308 end mempools------- 00:06:21.308 6 memzones totaling size 4.142822 MiB 00:06:21.308 size: 1.000366 MiB name: RG_ring_0_58017 00:06:21.308 size: 1.000366 MiB name: RG_ring_1_58017 00:06:21.308 size: 1.000366 MiB name: RG_ring_4_58017 00:06:21.308 size: 1.000366 MiB name: RG_ring_5_58017 00:06:21.308 size: 0.125366 MiB name: RG_ring_2_58017 00:06:21.308 size: 0.015991 MiB name: RG_ring_3_58017 00:06:21.308 end memzones------- 00:06:21.308 11:16:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:21.308 heap id: 0 total size: 816.000000 MiB number of busy elements: 312 number of free elements: 18 00:06:21.308 list of free elements. 
size: 16.792114 MiB 00:06:21.308 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:21.308 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:21.308 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:21.308 element at address: 0x200018d00040 with size: 0.999939 MiB 00:06:21.308 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:21.308 element at address: 0x200019200000 with size: 0.999084 MiB 00:06:21.308 element at address: 0x200031e00000 with size: 0.994324 MiB 00:06:21.308 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:21.308 element at address: 0x200018a00000 with size: 0.959656 MiB 00:06:21.308 element at address: 0x200019500040 with size: 0.936401 MiB 00:06:21.308 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:21.308 element at address: 0x20001ac00000 with size: 0.562439 MiB 00:06:21.308 element at address: 0x200000c00000 with size: 0.490173 MiB 00:06:21.308 element at address: 0x200018e00000 with size: 0.487976 MiB 00:06:21.308 element at address: 0x200019600000 with size: 0.485413 MiB 00:06:21.308 element at address: 0x200012c00000 with size: 0.443481 MiB 00:06:21.308 element at address: 0x200028000000 with size: 0.390442 MiB 00:06:21.308 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:21.308 list of standard malloc elements. 
size: 199.286987 MiB 00:06:21.308 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:06:21.308 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:06:21.308 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:06:21.308 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:21.308 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:21.308 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:21.308 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:06:21.308 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:21.308 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:06:21.308 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:06:21.308 element at address: 0x200012bff040 with size: 0.000305 MiB 00:06:21.308 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:06:21.308 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:06:21.308 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:06:21.308 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:06:21.308 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:06:21.308 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:06:21.308 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:06:21.308 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:06:21.308 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:06:21.308 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:06:21.308 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:06:21.308 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:06:21.308 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:06:21.309 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:06:21.309 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:06:21.309 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200000cff000 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012bff180 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012bff280 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012bff380 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012bff480 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012bff580 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012bff680 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012bff780 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012bff880 with size: 0.000244 MiB 00:06:21.309 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012c71880 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012c71980 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012c72080 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012c72180 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:06:21.309 
element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:06:21.309 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:06:21.309 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac914c0 with size: 0.000244 
MiB 00:06:21.309 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac930c0 
with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:06:21.309 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:06:21.310 element at 
address: 0x20001ac94cc0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:06:21.310 element at address: 0x200028063f40 with size: 0.000244 MiB 00:06:21.310 element at address: 0x200028064040 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806af80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806b080 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806b180 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806b280 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806b380 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806b480 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806b580 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806b680 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806b780 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806b880 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806b980 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806be80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806bf80 with size: 0.000244 MiB 
00:06:21.310 element at address: 0x20002806c080 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806c180 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806c280 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806c380 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806c480 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806c580 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806c680 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806c780 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806c880 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806c980 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806d080 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806d180 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806d280 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806d380 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806d480 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806d580 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806d680 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806d780 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806d880 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806d980 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806da80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806db80 with 
size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806de80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806df80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806e080 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806e180 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806e280 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806e380 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806e480 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806e580 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806e680 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806e780 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806e880 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806e980 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806f080 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806f180 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806f280 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806f380 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806f480 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806f580 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806f680 with size: 0.000244 MiB 00:06:21.310 element at address: 
0x20002806f780 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806f880 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806f980 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:06:21.310 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:06:21.310 list of memzone associated elements. size: 599.920898 MiB 00:06:21.310 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:06:21.310 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:21.310 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:06:21.310 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:21.310 element at address: 0x200012df4740 with size: 92.045105 MiB 00:06:21.310 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58017_0 00:06:21.310 element at address: 0x200000dff340 with size: 48.003113 MiB 00:06:21.310 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58017_0 00:06:21.310 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:06:21.310 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58017_0 00:06:21.310 element at address: 0x2000197be900 with size: 20.255615 MiB 00:06:21.310 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:21.310 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:06:21.310 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:21.310 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:06:21.310 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58017_0 00:06:21.310 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:06:21.310 associated memzone info: size: 2.000366 
MiB name: RG_MP_msgpool_58017 00:06:21.310 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:21.310 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58017 00:06:21.310 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:21.310 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:21.310 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:06:21.310 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:21.310 element at address: 0x200018afde00 with size: 1.008179 MiB 00:06:21.310 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:21.310 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:06:21.310 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:21.310 element at address: 0x200000cff100 with size: 1.000549 MiB 00:06:21.310 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58017 00:06:21.310 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:06:21.310 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58017 00:06:21.310 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:06:21.310 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58017 00:06:21.310 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:06:21.310 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58017 00:06:21.310 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:06:21.310 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58017 00:06:21.310 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:06:21.310 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58017 00:06:21.311 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:06:21.311 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:21.311 element at address: 0x200012c72280 with size: 0.500549 MiB 00:06:21.311 associated memzone info: size: 0.500366 MiB name: 
RG_MP_SCSI_TASK_Pool 00:06:21.311 element at address: 0x20001967c440 with size: 0.250549 MiB 00:06:21.311 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:21.311 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:06:21.311 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58017 00:06:21.311 element at address: 0x20000085df80 with size: 0.125549 MiB 00:06:21.311 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58017 00:06:21.311 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:06:21.311 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:21.311 element at address: 0x200028064140 with size: 0.023804 MiB 00:06:21.311 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:21.311 element at address: 0x200000859d40 with size: 0.016174 MiB 00:06:21.311 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58017 00:06:21.311 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:06:21.311 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:21.311 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:06:21.311 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58017 00:06:21.311 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:06:21.311 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58017 00:06:21.311 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:06:21.311 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58017 00:06:21.311 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:06:21.311 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:21.311 11:16:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:21.311 11:16:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58017 00:06:21.311 11:16:04 dpdk_mem_utility -- 
common/autotest_common.sh@954 -- # '[' -z 58017 ']' 00:06:21.311 11:16:04 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58017 00:06:21.311 11:16:04 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:21.311 11:16:04 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.311 11:16:04 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58017 00:06:21.311 killing process with pid 58017 00:06:21.311 11:16:04 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.311 11:16:04 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.311 11:16:04 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58017' 00:06:21.311 11:16:04 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58017 00:06:21.311 11:16:04 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58017 00:06:24.600 ************************************ 00:06:24.600 END TEST dpdk_mem_utility 00:06:24.600 ************************************ 00:06:24.600 00:06:24.600 real 0m4.230s 00:06:24.600 user 0m4.184s 00:06:24.600 sys 0m0.549s 00:06:24.600 11:16:06 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.600 11:16:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:24.600 11:16:07 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:24.600 11:16:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.600 11:16:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.600 11:16:07 -- common/autotest_common.sh@10 -- # set +x 00:06:24.600 ************************************ 00:06:24.600 START TEST event 00:06:24.600 ************************************ 00:06:24.600 11:16:07 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:24.600 * Looking for test 
storage... 00:06:24.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:24.600 11:16:07 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.600 11:16:07 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.600 11:16:07 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.600 11:16:07 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.600 11:16:07 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.600 11:16:07 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.600 11:16:07 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.600 11:16:07 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.600 11:16:07 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.600 11:16:07 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.600 11:16:07 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.600 11:16:07 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.600 11:16:07 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.600 11:16:07 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.600 11:16:07 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.600 11:16:07 event -- scripts/common.sh@344 -- # case "$op" in 00:06:24.600 11:16:07 event -- scripts/common.sh@345 -- # : 1 00:06:24.600 11:16:07 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.600 11:16:07 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.600 11:16:07 event -- scripts/common.sh@365 -- # decimal 1 00:06:24.600 11:16:07 event -- scripts/common.sh@353 -- # local d=1 00:06:24.600 11:16:07 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.600 11:16:07 event -- scripts/common.sh@355 -- # echo 1 00:06:24.600 11:16:07 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.600 11:16:07 event -- scripts/common.sh@366 -- # decimal 2 00:06:24.600 11:16:07 event -- scripts/common.sh@353 -- # local d=2 00:06:24.600 11:16:07 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.600 11:16:07 event -- scripts/common.sh@355 -- # echo 2 00:06:24.600 11:16:07 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.600 11:16:07 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.600 11:16:07 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.600 11:16:07 event -- scripts/common.sh@368 -- # return 0 00:06:24.600 11:16:07 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.600 11:16:07 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.600 --rc genhtml_branch_coverage=1 00:06:24.600 --rc genhtml_function_coverage=1 00:06:24.600 --rc genhtml_legend=1 00:06:24.600 --rc geninfo_all_blocks=1 00:06:24.600 --rc geninfo_unexecuted_blocks=1 00:06:24.600 00:06:24.600 ' 00:06:24.600 11:16:07 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.600 --rc genhtml_branch_coverage=1 00:06:24.600 --rc genhtml_function_coverage=1 00:06:24.600 --rc genhtml_legend=1 00:06:24.600 --rc geninfo_all_blocks=1 00:06:24.600 --rc geninfo_unexecuted_blocks=1 00:06:24.600 00:06:24.600 ' 00:06:24.600 11:16:07 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.600 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:24.600 --rc genhtml_branch_coverage=1 00:06:24.600 --rc genhtml_function_coverage=1 00:06:24.600 --rc genhtml_legend=1 00:06:24.600 --rc geninfo_all_blocks=1 00:06:24.600 --rc geninfo_unexecuted_blocks=1 00:06:24.600 00:06:24.600 ' 00:06:24.600 11:16:07 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.600 --rc genhtml_branch_coverage=1 00:06:24.600 --rc genhtml_function_coverage=1 00:06:24.600 --rc genhtml_legend=1 00:06:24.600 --rc geninfo_all_blocks=1 00:06:24.600 --rc geninfo_unexecuted_blocks=1 00:06:24.600 00:06:24.600 ' 00:06:24.600 11:16:07 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:24.600 11:16:07 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:24.600 11:16:07 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:24.600 11:16:07 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:24.600 11:16:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.600 11:16:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.600 ************************************ 00:06:24.600 START TEST event_perf 00:06:24.600 ************************************ 00:06:24.600 11:16:07 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:24.600 Running I/O for 1 seconds...[2024-11-20 11:16:07.302764] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:06:24.600 [2024-11-20 11:16:07.302954] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58131 ] 00:06:24.600 [2024-11-20 11:16:07.474128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:24.600 [2024-11-20 11:16:07.602648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.600 [2024-11-20 11:16:07.602698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.600 [2024-11-20 11:16:07.602762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.600 Running I/O for 1 seconds...[2024-11-20 11:16:07.602808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:25.982 00:06:25.982 lcore 0: 108184 00:06:25.982 lcore 1: 108187 00:06:25.982 lcore 2: 108190 00:06:25.982 lcore 3: 108187 00:06:25.982 done. 
00:06:25.982 00:06:25.982 real 0m1.631s 00:06:25.982 user 0m4.376s 00:06:25.982 sys 0m0.127s 00:06:25.982 11:16:08 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.982 11:16:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:25.982 ************************************ 00:06:25.982 END TEST event_perf 00:06:25.982 ************************************ 00:06:25.982 11:16:08 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:25.982 11:16:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:25.982 11:16:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.982 11:16:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.982 ************************************ 00:06:25.982 START TEST event_reactor 00:06:25.982 ************************************ 00:06:25.982 11:16:08 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:25.982 [2024-11-20 11:16:08.998289] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:06:25.982 [2024-11-20 11:16:08.998468] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58165 ] 00:06:26.243 [2024-11-20 11:16:09.173199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.243 [2024-11-20 11:16:09.323629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.624 test_start 00:06:27.624 oneshot 00:06:27.624 tick 100 00:06:27.624 tick 100 00:06:27.624 tick 250 00:06:27.624 tick 100 00:06:27.624 tick 100 00:06:27.624 tick 100 00:06:27.624 tick 250 00:06:27.624 tick 500 00:06:27.624 tick 100 00:06:27.624 tick 100 00:06:27.624 tick 250 00:06:27.624 tick 100 00:06:27.624 tick 100 00:06:27.624 test_end 00:06:27.624 00:06:27.624 real 0m1.621s 00:06:27.624 user 0m1.396s 00:06:27.624 sys 0m0.114s 00:06:27.624 11:16:10 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.624 11:16:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:27.624 ************************************ 00:06:27.624 END TEST event_reactor 00:06:27.624 ************************************ 00:06:27.624 11:16:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:27.624 11:16:10 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:27.624 11:16:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.624 11:16:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.624 ************************************ 00:06:27.624 START TEST event_reactor_perf 00:06:27.624 ************************************ 00:06:27.624 11:16:10 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:27.624 [2024-11-20 
11:16:10.688779] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:06:27.624 [2024-11-20 11:16:10.688972] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58207 ] 00:06:27.884 [2024-11-20 11:16:10.872431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.143 [2024-11-20 11:16:11.019444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.525 test_start 00:06:29.525 test_end 00:06:29.525 Performance: 366220 events per second 00:06:29.525 00:06:29.525 real 0m1.640s 00:06:29.525 user 0m1.408s 00:06:29.525 sys 0m0.123s 00:06:29.525 ************************************ 00:06:29.525 END TEST event_reactor_perf 00:06:29.525 ************************************ 00:06:29.525 11:16:12 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.525 11:16:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.525 11:16:12 event -- event/event.sh@49 -- # uname -s 00:06:29.525 11:16:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:29.525 11:16:12 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:29.525 11:16:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.525 11:16:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.525 11:16:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.525 ************************************ 00:06:29.525 START TEST event_scheduler 00:06:29.525 ************************************ 00:06:29.525 11:16:12 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:29.525 * Looking for test storage... 
00:06:29.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:29.525 11:16:12 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:29.525 11:16:12 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:29.525 11:16:12 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:29.525 11:16:12 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.525 11:16:12 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:29.525 11:16:12 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.525 11:16:12 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:29.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.525 --rc genhtml_branch_coverage=1 00:06:29.525 --rc genhtml_function_coverage=1 00:06:29.525 --rc genhtml_legend=1 00:06:29.525 --rc geninfo_all_blocks=1 00:06:29.525 --rc geninfo_unexecuted_blocks=1 00:06:29.525 00:06:29.525 ' 00:06:29.525 11:16:12 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:29.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.525 --rc genhtml_branch_coverage=1 00:06:29.525 --rc genhtml_function_coverage=1 00:06:29.525 --rc 
genhtml_legend=1 00:06:29.525 --rc geninfo_all_blocks=1 00:06:29.525 --rc geninfo_unexecuted_blocks=1 00:06:29.525 00:06:29.525 ' 00:06:29.526 11:16:12 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:29.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.526 --rc genhtml_branch_coverage=1 00:06:29.526 --rc genhtml_function_coverage=1 00:06:29.526 --rc genhtml_legend=1 00:06:29.526 --rc geninfo_all_blocks=1 00:06:29.526 --rc geninfo_unexecuted_blocks=1 00:06:29.526 00:06:29.526 ' 00:06:29.526 11:16:12 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:29.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.526 --rc genhtml_branch_coverage=1 00:06:29.526 --rc genhtml_function_coverage=1 00:06:29.526 --rc genhtml_legend=1 00:06:29.526 --rc geninfo_all_blocks=1 00:06:29.526 --rc geninfo_unexecuted_blocks=1 00:06:29.526 00:06:29.526 ' 00:06:29.526 11:16:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:29.526 11:16:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58283 00:06:29.526 11:16:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:29.526 11:16:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:29.526 11:16:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58283 00:06:29.526 11:16:12 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58283 ']' 00:06:29.526 11:16:12 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.526 11:16:12 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.526 11:16:12 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:29.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.526 11:16:12 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.526 11:16:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:29.785 [2024-11-20 11:16:12.669040] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:06:29.785 [2024-11-20 11:16:12.669283] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58283 ] 00:06:29.785 [2024-11-20 11:16:12.850208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.044 [2024-11-20 11:16:12.971772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.044 [2024-11-20 11:16:12.971999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.044 [2024-11-20 11:16:12.972088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.044 [2024-11-20 11:16:12.972128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.611 11:16:13 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.611 11:16:13 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:30.611 11:16:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:30.611 11:16:13 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.611 11:16:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.611 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:30.611 POWER: Cannot set governor of lcore 0 to userspace 00:06:30.611 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:30.611 POWER: Cannot set governor of lcore 0 to performance 00:06:30.611 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:30.611 POWER: Cannot set governor of lcore 0 to userspace 00:06:30.611 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:30.611 POWER: Cannot set governor of lcore 0 to userspace 00:06:30.611 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:30.611 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:30.611 POWER: Unable to set Power Management Environment for lcore 0 00:06:30.611 [2024-11-20 11:16:13.533723] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:30.611 [2024-11-20 11:16:13.533779] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:30.611 [2024-11-20 11:16:13.533821] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:30.611 [2024-11-20 11:16:13.533875] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:30.611 [2024-11-20 11:16:13.533908] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:30.611 [2024-11-20 11:16:13.533948] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:30.611 11:16:13 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.611 11:16:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:30.611 11:16:13 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.611 11:16:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.870 [2024-11-20 11:16:13.905395] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:30.870 11:16:13 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.870 11:16:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:30.870 11:16:13 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.870 11:16:13 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.870 11:16:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.870 ************************************ 00:06:30.870 START TEST scheduler_create_thread 00:06:30.870 ************************************ 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.870 2 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.870 3 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.870 4 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.870 5 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.870 6 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:30.870 7 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.870 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.129 8 00:06:31.129 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.129 11:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:31.129 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.129 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.129 9 00:06:31.129 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.129 11:16:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:31.129 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.129 11:16:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.129 10 00:06:31.129 11:16:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.129 11:16:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:31.129 11:16:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.129 11:16:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.129 11:16:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.129 11:16:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:31.129 11:16:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:31.129 11:16:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.129 11:16:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.129 11:16:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.129 11:16:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:31.129 11:16:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.129 11:16:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.507 11:16:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.507 11:16:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:32.507 11:16:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:32.507 11:16:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.507 11:16:15 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.449 ************************************ 00:06:33.449 END TEST scheduler_create_thread 00:06:33.449 ************************************ 00:06:33.449 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.449 00:06:33.449 real 0m2.617s 00:06:33.449 user 0m0.028s 00:06:33.449 sys 0m0.008s 00:06:33.449 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.449 11:16:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.710 11:16:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:33.710 11:16:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58283 00:06:33.710 11:16:16 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58283 ']' 00:06:33.710 11:16:16 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58283 00:06:33.710 11:16:16 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:33.710 11:16:16 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.710 11:16:16 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58283 00:06:33.710 killing process with pid 58283 00:06:33.710 11:16:16 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:33.710 11:16:16 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:33.710 11:16:16 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58283' 00:06:33.710 11:16:16 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58283 00:06:33.710 11:16:16 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58283 00:06:33.969 [2024-11-20 11:16:17.014683] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:35.348 ************************************ 00:06:35.348 END TEST event_scheduler 00:06:35.348 ************************************ 00:06:35.348 00:06:35.348 real 0m5.923s 00:06:35.348 user 0m10.093s 00:06:35.348 sys 0m0.528s 00:06:35.348 11:16:18 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.348 11:16:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:35.348 11:16:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:35.348 11:16:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:35.348 11:16:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.348 11:16:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.348 11:16:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.348 ************************************ 00:06:35.348 START TEST app_repeat 00:06:35.348 ************************************ 00:06:35.348 11:16:18 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:35.348 11:16:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.348 11:16:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.348 11:16:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:35.348 11:16:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.348 11:16:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:35.348 11:16:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:35.348 11:16:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:35.348 Process app_repeat pid: 58389 00:06:35.348 spdk_app_start Round 0 00:06:35.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:35.348 11:16:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58389 00:06:35.348 11:16:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:35.348 11:16:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58389' 00:06:35.348 11:16:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:35.348 11:16:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:35.348 11:16:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58389 /var/tmp/spdk-nbd.sock 00:06:35.348 11:16:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58389 ']' 00:06:35.348 11:16:18 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:35.348 11:16:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.348 11:16:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.348 11:16:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:35.348 11:16:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.348 11:16:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.348 [2024-11-20 11:16:18.412734] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:06:35.348 [2024-11-20 11:16:18.412945] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58389 ] 00:06:35.608 [2024-11-20 11:16:18.592988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.867 [2024-11-20 11:16:18.738790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.867 [2024-11-20 11:16:18.738835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.453 11:16:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.453 11:16:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:36.453 11:16:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.737 Malloc0 00:06:36.737 11:16:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.996 Malloc1 00:06:36.996 11:16:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.996 11:16:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.996 11:16:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.996 11:16:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:36.996 11:16:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.996 11:16:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:36.996 11:16:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.996 11:16:19 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.996 11:16:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.996 11:16:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.996 11:16:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.996 11:16:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.996 11:16:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:36.996 11:16:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.996 11:16:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.996 11:16:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:37.257 /dev/nbd0 00:06:37.257 11:16:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.257 11:16:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:37.257 11:16:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:37.257 11:16:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:37.257 11:16:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.257 11:16:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.257 11:16:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:37.257 11:16:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:37.257 11:16:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.257 11:16:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.257 11:16:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.257 1+0 records in 00:06:37.257 1+0 
records out 00:06:37.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524699 s, 7.8 MB/s 00:06:37.257 11:16:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.257 11:16:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:37.257 11:16:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.257 11:16:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.257 11:16:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:37.257 11:16:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.257 11:16:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.257 11:16:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:37.517 /dev/nbd1 00:06:37.517 11:16:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.517 11:16:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.517 11:16:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:37.517 11:16:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:37.517 11:16:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.517 11:16:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.517 11:16:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:37.517 11:16:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:37.517 11:16:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.517 11:16:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.517 11:16:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.517 1+0 records in 00:06:37.517 1+0 records out 00:06:37.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275246 s, 14.9 MB/s 00:06:37.517 11:16:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.517 11:16:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:37.517 11:16:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.517 11:16:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.517 11:16:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:37.517 11:16:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.517 11:16:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.517 11:16:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.517 11:16:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.517 11:16:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.776 11:16:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:37.776 { 00:06:37.776 "nbd_device": "/dev/nbd0", 00:06:37.776 "bdev_name": "Malloc0" 00:06:37.776 }, 00:06:37.776 { 00:06:37.776 "nbd_device": "/dev/nbd1", 00:06:37.776 "bdev_name": "Malloc1" 00:06:37.776 } 00:06:37.776 ]' 00:06:37.776 11:16:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:37.776 { 00:06:37.776 "nbd_device": "/dev/nbd0", 00:06:37.776 "bdev_name": "Malloc0" 00:06:37.776 }, 00:06:37.777 { 00:06:37.777 "nbd_device": "/dev/nbd1", 00:06:37.777 "bdev_name": "Malloc1" 00:06:37.777 } 00:06:37.777 ]' 00:06:37.777 11:16:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
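The trace above lists attached devices with `nbd_get_disks`, extracts each `nbd_device` with `jq -r '.[] | .nbd_device'`, and then counts matches of `/dev/nbd` with `grep -c`. A minimal, jq-free sketch of that counting step, using an inline JSON stand-in for the real RPC response (the paths and names below are illustrative only):

```shell
#!/usr/bin/env bash
# Count /dev/nbd entries in nbd_get_disks-style JSON output.
# The inline JSON is a stand-in for the real rpc.py response;
# the test itself does the extraction with jq instead of grep.
json='[{"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"},
 {"nbd_device": "/dev/nbd1", "bdev_name": "Malloc1"}]'

# grep -Eo pulls out each device name; wc -l counts them.
count=$(grep -Eo '/dev/nbd[0-9]+' <<<"$json" | wc -l)
echo "$count"   # prints 2
```

After `nbd_stop_disk` runs for both devices, the same pipeline over the empty list `[]` yields a count of 0, which is what the `'[' 0 -ne 0 ']'` check in the trace confirms.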
00:06:37.777 11:16:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:37.777 /dev/nbd1' 00:06:37.777 11:16:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:37.777 /dev/nbd1' 00:06:37.777 11:16:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.777 11:16:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:37.777 11:16:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:37.777 11:16:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:37.777 11:16:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:37.777 11:16:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:37.777 11:16:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.777 11:16:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.777 11:16:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:37.777 11:16:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:37.777 11:16:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:37.777 11:16:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:37.777 256+0 records in 00:06:37.777 256+0 records out 00:06:37.777 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00568436 s, 184 MB/s 00:06:37.777 11:16:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.777 11:16:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:37.777 256+0 records in 00:06:37.777 256+0 records out 00:06:37.777 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206129 s, 50.9 MB/s 00:06:37.777 11:16:20 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.777 11:16:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:38.036 256+0 records in 00:06:38.036 256+0 records out 00:06:38.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314809 s, 33.3 MB/s 00:06:38.036 11:16:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:38.036 11:16:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.036 11:16:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.036 11:16:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:38.036 11:16:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:38.036 11:16:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:38.036 11:16:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:38.036 11:16:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.036 11:16:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:38.036 11:16:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.036 11:16:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:38.036 11:16:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:38.036 11:16:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:38.036 11:16:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.036 11:16:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.036 11:16:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.036 11:16:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:38.036 11:16:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.036 11:16:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.294 11:16:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.294 11:16:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.294 11:16:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.294 11:16:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.294 11:16:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.294 11:16:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.294 11:16:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.294 11:16:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.294 11:16:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.294 11:16:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:38.552 11:16:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:38.552 11:16:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:38.552 11:16:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:38.552 11:16:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.552 11:16:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.552 11:16:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:38.552 11:16:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:38.552 11:16:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.552 11:16:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.552 11:16:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.552 11:16:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.810 11:16:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:38.810 11:16:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.810 11:16:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:38.810 11:16:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:38.810 11:16:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:38.810 11:16:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.810 11:16:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:38.810 11:16:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:38.810 11:16:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:38.810 11:16:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:38.810 11:16:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:38.810 11:16:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:38.810 11:16:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:39.378 11:16:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:40.756 [2024-11-20 11:16:23.485390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.756 [2024-11-20 11:16:23.613175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.756 [2024-11-20 11:16:23.613176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.756 
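In the round traced above, every `nbd_start_disk` RPC is followed by a `waitfornbd` poll: the helper retries `grep -q -w <name> /proc/partitions` up to 20 times before attempting the first direct read. A sketch of that wait loop, with the partitions-table path made a parameter purely for testability (an assumption; the real helper in `common/autotest_common.sh` reads `/proc/partitions` directly):

```shell
#!/usr/bin/env bash
# Poll a partitions table until a device name appears, as waitfornbd does.
# The second argument is a hypothetical addition so the loop can be
# exercised against a plain file instead of /proc/partitions.
wait_for_nbd() {
  local name=$1 table=${2:-/proc/partitions} i
  for ((i = 1; i <= 20; i++)); do
    # -w matches the whole word, so waiting for nbd1 won't match nbd10
    grep -q -w "$name" "$table" && return 0
    sleep 0.1
  done
  return 1   # device never showed up; caller treats the attach as failed
}
```

The `break` lines in the trace correspond to the grep succeeding on the first try; only after 20 misses would the helper fall through and return nonzero.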
[2024-11-20 11:16:23.832223] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:40.756 [2024-11-20 11:16:23.832302] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:42.134 spdk_app_start Round 1 00:06:42.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:42.134 11:16:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:42.134 11:16:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:42.134 11:16:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58389 /var/tmp/spdk-nbd.sock 00:06:42.134 11:16:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58389 ']' 00:06:42.134 11:16:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.134 11:16:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.134 11:16:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:42.134 11:16:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.134 11:16:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:42.394 11:16:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.394 11:16:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:42.394 11:16:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.653 Malloc0 00:06:42.912 11:16:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.172 Malloc1 00:06:43.172 11:16:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.172 11:16:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.172 11:16:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.172 11:16:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:43.172 11:16:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.172 11:16:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:43.172 11:16:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:43.172 11:16:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.172 11:16:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.172 11:16:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:43.172 11:16:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.172 11:16:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:43.172 11:16:26 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:43.172 11:16:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:43.172 11:16:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.172 11:16:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:43.432 /dev/nbd0 00:06:43.432 11:16:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:43.432 11:16:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:43.432 11:16:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:43.432 11:16:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:43.432 11:16:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.432 11:16:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.432 11:16:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:43.432 11:16:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:43.432 11:16:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.432 11:16:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.432 11:16:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.432 1+0 records in 00:06:43.432 1+0 records out 00:06:43.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431476 s, 9.5 MB/s 00:06:43.432 11:16:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.432 11:16:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:43.432 11:16:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.432 11:16:26 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.432 11:16:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:43.432 11:16:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.432 11:16:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.432 11:16:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:43.691 /dev/nbd1 00:06:43.691 11:16:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:43.691 11:16:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:43.691 11:16:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:43.691 11:16:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:43.691 11:16:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.691 11:16:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.691 11:16:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:43.691 11:16:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:43.691 11:16:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.691 11:16:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.691 11:16:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.691 1+0 records in 00:06:43.691 1+0 records out 00:06:43.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493791 s, 8.3 MB/s 00:06:43.691 11:16:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.691 11:16:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:43.691 11:16:26 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.691 11:16:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.691 11:16:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:43.691 11:16:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.691 11:16:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.691 11:16:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:43.691 11:16:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.691 11:16:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.950 11:16:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:43.950 { 00:06:43.950 "nbd_device": "/dev/nbd0", 00:06:43.950 "bdev_name": "Malloc0" 00:06:43.950 }, 00:06:43.950 { 00:06:43.950 "nbd_device": "/dev/nbd1", 00:06:43.950 "bdev_name": "Malloc1" 00:06:43.950 } 00:06:43.950 ]' 00:06:43.950 11:16:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:43.950 { 00:06:43.950 "nbd_device": "/dev/nbd0", 00:06:43.950 "bdev_name": "Malloc0" 00:06:43.950 }, 00:06:43.950 { 00:06:43.950 "nbd_device": "/dev/nbd1", 00:06:43.950 "bdev_name": "Malloc1" 00:06:43.950 } 00:06:43.950 ]' 00:06:43.950 11:16:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:44.211 /dev/nbd1' 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:44.211 /dev/nbd1' 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:44.211 
11:16:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:44.211 256+0 records in 00:06:44.211 256+0 records out 00:06:44.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137521 s, 76.2 MB/s 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:44.211 256+0 records in 00:06:44.211 256+0 records out 00:06:44.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275454 s, 38.1 MB/s 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:44.211 256+0 records in 00:06:44.211 256+0 records out 00:06:44.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0316516 s, 33.1 MB/s 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.211 11:16:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:44.471 11:16:27 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:44.471 11:16:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:44.471 11:16:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:44.471 11:16:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.471 11:16:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.471 11:16:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:44.471 11:16:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:44.471 11:16:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.471 11:16:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.471 11:16:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:44.733 11:16:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:44.733 11:16:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:44.733 11:16:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:44.733 11:16:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.733 11:16:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.733 11:16:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:44.733 11:16:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:44.733 11:16:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.733 11:16:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.733 11:16:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.733 11:16:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.992 11:16:27 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:44.992 11:16:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:44.992 11:16:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.992 11:16:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:44.992 11:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:44.992 11:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.992 11:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:44.992 11:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:44.992 11:16:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:44.992 11:16:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:44.992 11:16:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:44.992 11:16:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:44.992 11:16:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:45.561 11:16:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:46.939 [2024-11-20 11:16:29.798551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.939 [2024-11-20 11:16:29.929575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.939 [2024-11-20 11:16:29.929611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.198 [2024-11-20 11:16:30.151915] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:47.198 [2024-11-20 11:16:30.152090] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:48.579 spdk_app_start Round 2 00:06:48.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
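Each round's data check follows the same `nbd_dd_data_verify` shape seen in the trace: fill a 1 MiB temp file from `/dev/urandom`, `dd` it onto every nbd device with `oflag=direct`, then `cmp` the first 1M of each device against the file. A self-contained sketch using plain temp files in place of `/dev/nbd0` and `/dev/nbd1` (so `oflag=direct` is dropped here, since direct I/O needs a real block device; file names are illustrative):

```shell
#!/usr/bin/env bash
# Write-then-verify in the style of nbd_dd_data_verify, with temp files
# standing in for the nbd devices.
set -e
pattern=$(mktemp)   # plays the role of the nbdrandtest file
dd if=/dev/urandom of="$pattern" bs=4096 count=256 status=none

for dev in "$(mktemp)" "$(mktemp)"; do
  # write phase: copy the random pattern onto the "device"
  dd if="$pattern" of="$dev" bs=4096 count=256 status=none
  # verify phase: byte-compare the first 1M, as cmp -b -n 1M does
  cmp -n 1M "$pattern" "$dev"
  rm -f "$dev"
done
verified=yes
rm -f "$pattern"
```

With `set -e`, any `cmp` mismatch aborts the script nonzero, which mirrors how a failed verify fails the test round before the disks are stopped.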
00:06:48.579 11:16:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:48.579 11:16:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:48.579 11:16:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58389 /var/tmp/spdk-nbd.sock 00:06:48.579 11:16:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58389 ']' 00:06:48.579 11:16:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:48.579 11:16:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.579 11:16:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:48.579 11:16:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.579 11:16:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:48.839 11:16:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.839 11:16:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:48.839 11:16:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.099 Malloc0 00:06:49.099 11:16:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.359 Malloc1 00:06:49.359 11:16:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.359 11:16:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.359 11:16:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.359 11:16:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:49.359 11:16:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.359 11:16:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:49.359 11:16:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.359 11:16:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.359 11:16:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.359 11:16:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:49.359 11:16:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.359 11:16:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:49.359 11:16:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:49.359 11:16:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:49.359 11:16:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.359 11:16:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:49.619 /dev/nbd0 00:06:49.619 11:16:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.619 11:16:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.619 11:16:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:49.619 11:16:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:49.619 11:16:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:49.619 11:16:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:49.619 11:16:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:49.619 11:16:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:49.619 11:16:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:06:49.619 11:16:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:49.619 11:16:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.878 1+0 records in 00:06:49.878 1+0 records out 00:06:49.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523475 s, 7.8 MB/s 00:06:49.878 11:16:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.878 11:16:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:49.878 11:16:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.878 11:16:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:49.878 11:16:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:49.878 11:16:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.878 11:16:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.878 11:16:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:50.138 /dev/nbd1 00:06:50.138 11:16:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:50.138 11:16:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:50.138 11:16:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:50.138 11:16:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:50.138 11:16:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:50.138 11:16:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:50.138 11:16:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:50.138 11:16:33 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:50.138 11:16:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:50.138 11:16:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:50.138 11:16:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:50.138 1+0 records in 00:06:50.138 1+0 records out 00:06:50.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287123 s, 14.3 MB/s 00:06:50.138 11:16:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:50.138 11:16:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:50.138 11:16:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:50.138 11:16:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:50.138 11:16:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:50.138 11:16:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.138 11:16:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.138 11:16:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.138 11:16:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.138 11:16:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:50.398 { 00:06:50.398 "nbd_device": "/dev/nbd0", 00:06:50.398 "bdev_name": "Malloc0" 00:06:50.398 }, 00:06:50.398 { 00:06:50.398 "nbd_device": "/dev/nbd1", 00:06:50.398 "bdev_name": "Malloc1" 00:06:50.398 } 00:06:50.398 ]' 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.398 { 
00:06:50.398 "nbd_device": "/dev/nbd0", 00:06:50.398 "bdev_name": "Malloc0" 00:06:50.398 }, 00:06:50.398 { 00:06:50.398 "nbd_device": "/dev/nbd1", 00:06:50.398 "bdev_name": "Malloc1" 00:06:50.398 } 00:06:50.398 ]' 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:50.398 /dev/nbd1' 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:50.398 /dev/nbd1' 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:50.398 256+0 records in 00:06:50.398 256+0 records out 00:06:50.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118077 s, 88.8 MB/s 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.398 11:16:33 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:50.398 256+0 records in 00:06:50.398 256+0 records out 00:06:50.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292098 s, 35.9 MB/s 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:50.398 256+0 records in 00:06:50.398 256+0 records out 00:06:50.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310254 s, 33.8 MB/s 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:50.398 11:16:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:06:50.399 11:16:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:50.399 11:16:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.399 11:16:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.399 11:16:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:50.399 11:16:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:50.399 11:16:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.399 11:16:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.658 11:16:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.658 11:16:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.658 11:16:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.658 11:16:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.658 11:16:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.658 11:16:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.658 11:16:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.659 11:16:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.659 11:16:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.659 11:16:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:51.227 11:16:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:51.227 11:16:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:51.227 11:16:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:51.227 11:16:34 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.227 11:16:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.227 11:16:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:51.227 11:16:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:51.227 11:16:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.228 11:16:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.228 11:16:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.228 11:16:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.491 11:16:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:51.491 11:16:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:51.491 11:16:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.491 11:16:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:51.491 11:16:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:51.491 11:16:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.491 11:16:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:51.491 11:16:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:51.491 11:16:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:51.491 11:16:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:51.491 11:16:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:51.491 11:16:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:51.491 11:16:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:52.099 11:16:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:53.475 
[2024-11-20 11:16:36.243184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.475 [2024-11-20 11:16:36.401359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.475 [2024-11-20 11:16:36.401359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.735 [2024-11-20 11:16:36.657652] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:53.735 [2024-11-20 11:16:36.657886] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:55.114 11:16:37 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58389 /var/tmp/spdk-nbd.sock 00:06:55.114 11:16:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58389 ']' 00:06:55.114 11:16:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:55.114 11:16:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.114 11:16:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:55.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:55.114 11:16:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.114 11:16:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:55.114 11:16:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.114 11:16:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:55.114 11:16:38 event.app_repeat -- event/event.sh@39 -- # killprocess 58389 00:06:55.114 11:16:38 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58389 ']' 00:06:55.114 11:16:38 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58389 00:06:55.114 11:16:38 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:55.114 11:16:38 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.114 11:16:38 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58389 00:06:55.114 killing process with pid 58389 00:06:55.114 11:16:38 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.114 11:16:38 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.114 11:16:38 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58389' 00:06:55.114 11:16:38 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58389 00:06:55.114 11:16:38 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58389 00:06:56.501 spdk_app_start is called in Round 0. 00:06:56.501 Shutdown signal received, stop current app iteration 00:06:56.501 Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 reinitialization... 00:06:56.501 spdk_app_start is called in Round 1. 00:06:56.501 Shutdown signal received, stop current app iteration 00:06:56.501 Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 reinitialization... 00:06:56.501 spdk_app_start is called in Round 2. 
00:06:56.501 Shutdown signal received, stop current app iteration 00:06:56.501 Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 reinitialization... 00:06:56.501 spdk_app_start is called in Round 3. 00:06:56.501 Shutdown signal received, stop current app iteration 00:06:56.501 ************************************ 00:06:56.501 END TEST app_repeat 00:06:56.501 ************************************ 00:06:56.501 11:16:39 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:56.501 11:16:39 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:56.501 00:06:56.501 real 0m21.102s 00:06:56.501 user 0m45.703s 00:06:56.501 sys 0m3.015s 00:06:56.501 11:16:39 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.501 11:16:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:56.501 11:16:39 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:56.501 11:16:39 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:56.501 11:16:39 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.501 11:16:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.501 11:16:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:56.501 ************************************ 00:06:56.501 START TEST cpu_locks 00:06:56.501 ************************************ 00:06:56.501 11:16:39 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:56.761 * Looking for test storage... 
00:06:56.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:56.761 11:16:39 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:56.761 11:16:39 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:56.761 11:16:39 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:56.761 11:16:39 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.761 11:16:39 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:56.761 11:16:39 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.761 11:16:39 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:56.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.761 --rc genhtml_branch_coverage=1 00:06:56.761 --rc genhtml_function_coverage=1 00:06:56.761 --rc genhtml_legend=1 00:06:56.761 --rc geninfo_all_blocks=1 00:06:56.761 --rc geninfo_unexecuted_blocks=1 00:06:56.761 00:06:56.761 ' 00:06:56.761 11:16:39 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:56.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.761 --rc genhtml_branch_coverage=1 00:06:56.761 --rc genhtml_function_coverage=1 00:06:56.761 --rc genhtml_legend=1 00:06:56.761 --rc geninfo_all_blocks=1 00:06:56.761 --rc geninfo_unexecuted_blocks=1 
00:06:56.761 00:06:56.761 ' 00:06:56.761 11:16:39 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:56.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.761 --rc genhtml_branch_coverage=1 00:06:56.761 --rc genhtml_function_coverage=1 00:06:56.761 --rc genhtml_legend=1 00:06:56.761 --rc geninfo_all_blocks=1 00:06:56.761 --rc geninfo_unexecuted_blocks=1 00:06:56.761 00:06:56.761 ' 00:06:56.761 11:16:39 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:56.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.761 --rc genhtml_branch_coverage=1 00:06:56.761 --rc genhtml_function_coverage=1 00:06:56.761 --rc genhtml_legend=1 00:06:56.761 --rc geninfo_all_blocks=1 00:06:56.761 --rc geninfo_unexecuted_blocks=1 00:06:56.761 00:06:56.761 ' 00:06:56.761 11:16:39 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:56.761 11:16:39 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:56.761 11:16:39 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:56.761 11:16:39 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:56.761 11:16:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.761 11:16:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.761 11:16:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.761 ************************************ 00:06:56.761 START TEST default_locks 00:06:56.761 ************************************ 00:06:56.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:56.761 11:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:56.761 11:16:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58858 00:06:56.761 11:16:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.761 11:16:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58858 00:06:56.761 11:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58858 ']' 00:06:56.761 11:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.761 11:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.761 11:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.761 11:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.761 11:16:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.021 [2024-11-20 11:16:39.897841] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:06:57.021 [2024-11-20 11:16:39.898057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58858 ] 00:06:57.021 [2024-11-20 11:16:40.059085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.282 [2024-11-20 11:16:40.222204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.221 11:16:41 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.221 11:16:41 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:58.221 11:16:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58858 00:06:58.221 11:16:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58858 00:06:58.221 11:16:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.790 11:16:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58858 00:06:58.790 11:16:41 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58858 ']' 00:06:58.790 11:16:41 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58858 00:06:58.790 11:16:41 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:58.790 11:16:41 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.790 11:16:41 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58858 00:06:58.790 killing process with pid 58858 00:06:58.790 11:16:41 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.790 11:16:41 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.790 11:16:41 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58858' 00:06:58.790 11:16:41 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58858 00:06:58.790 11:16:41 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58858 00:07:01.328 11:16:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58858 00:07:01.328 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58858 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58858 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58858 ']' 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:01.329 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.329 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58858) - No such process 00:07:01.329 ERROR: process (pid: 58858) is no longer running 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:01.329 ************************************ 00:07:01.329 END TEST default_locks 00:07:01.329 ************************************ 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:01.329 00:07:01.329 real 0m4.591s 00:07:01.329 user 0m4.518s 00:07:01.329 sys 0m0.803s 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.329 11:16:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.329 11:16:44 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:01.329 11:16:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:07:01.329 11:16:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.329 11:16:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.588 ************************************ 00:07:01.588 START TEST default_locks_via_rpc 00:07:01.588 ************************************ 00:07:01.588 11:16:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:01.588 11:16:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58939 00:07:01.588 11:16:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:01.588 11:16:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58939 00:07:01.588 11:16:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58939 ']' 00:07:01.588 11:16:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.588 11:16:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.588 11:16:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.588 11:16:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.588 11:16:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.588 [2024-11-20 11:16:44.559955] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:07:01.588 [2024-11-20 11:16:44.560189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58939 ] 00:07:01.848 [2024-11-20 11:16:44.721355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.848 [2024-11-20 11:16:44.846342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.785 11:16:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.785 11:16:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:02.785 11:16:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:02.785 11:16:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.785 11:16:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.785 11:16:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.785 11:16:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:02.785 11:16:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:02.785 11:16:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:02.785 11:16:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:02.785 11:16:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:02.785 11:16:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.785 11:16:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.785 11:16:45 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.785 11:16:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58939 00:07:02.785 11:16:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58939 00:07:02.785 11:16:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:03.046 11:16:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58939 00:07:03.046 11:16:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58939 ']' 00:07:03.046 11:16:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58939 00:07:03.046 11:16:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:03.046 11:16:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.046 11:16:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58939 00:07:03.046 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.046 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.046 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58939' 00:07:03.046 killing process with pid 58939 00:07:03.046 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58939 00:07:03.046 11:16:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58939 00:07:05.583 00:07:05.583 real 0m4.147s 00:07:05.583 user 0m4.111s 00:07:05.583 sys 0m0.563s 00:07:05.583 ************************************ 00:07:05.583 END TEST default_locks_via_rpc 00:07:05.583 ************************************ 00:07:05.583 
11:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.583 11:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.583 11:16:48 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:05.583 11:16:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.583 11:16:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.583 11:16:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.583 ************************************ 00:07:05.583 START TEST non_locking_app_on_locked_coremask 00:07:05.583 ************************************ 00:07:05.583 11:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:05.583 11:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59015 00:07:05.583 11:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:05.583 11:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59015 /var/tmp/spdk.sock 00:07:05.583 11:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59015 ']' 00:07:05.583 11:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.583 11:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.583 11:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:05.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.583 11:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.583 11:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.842 [2024-11-20 11:16:48.768506] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:07:05.842 [2024-11-20 11:16:48.768721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59015 ] 00:07:05.842 [2024-11-20 11:16:48.945119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.100 [2024-11-20 11:16:49.068037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.040 11:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.040 11:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:07.040 11:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59031 00:07:07.040 11:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:07.040 11:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59031 /var/tmp/spdk2.sock 00:07:07.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:07.040 11:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59031 ']' 00:07:07.040 11:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.040 11:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.040 11:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.040 11:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.040 11:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.040 [2024-11-20 11:16:50.087004] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:07:07.040 [2024-11-20 11:16:50.087217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59031 ] 00:07:07.309 [2024-11-20 11:16:50.262675] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:07.309 [2024-11-20 11:16:50.262734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.568 [2024-11-20 11:16:50.520726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.099 11:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.099 11:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:10.099 11:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59015 00:07:10.099 11:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59015 00:07:10.099 11:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:10.357 11:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59015 00:07:10.357 11:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59015 ']' 00:07:10.357 11:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59015 00:07:10.357 11:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:10.357 11:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.357 11:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59015 00:07:10.357 11:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.357 killing process with pid 59015 00:07:10.357 11:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.357 11:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59015' 00:07:10.357 11:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59015 00:07:10.357 11:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59015 00:07:15.662 11:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59031 00:07:15.662 11:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59031 ']' 00:07:15.662 11:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59031 00:07:15.662 11:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:15.662 11:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.662 11:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59031 00:07:15.662 killing process with pid 59031 00:07:15.662 11:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.662 11:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.662 11:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59031' 00:07:15.662 11:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59031 00:07:15.662 11:16:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59031 00:07:18.197 00:07:18.197 real 0m12.205s 00:07:18.197 user 0m12.467s 00:07:18.197 sys 0m1.313s 00:07:18.197 11:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:18.197 ************************************ 00:07:18.197 END TEST non_locking_app_on_locked_coremask 00:07:18.197 ************************************ 00:07:18.197 11:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.197 11:17:00 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:18.197 11:17:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.197 11:17:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.198 11:17:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.198 ************************************ 00:07:18.198 START TEST locking_app_on_unlocked_coremask 00:07:18.198 ************************************ 00:07:18.198 11:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:18.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:18.198 11:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59191 00:07:18.198 11:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59191 /var/tmp/spdk.sock 00:07:18.198 11:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:18.198 11:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59191 ']' 00:07:18.198 11:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.198 11:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.198 11:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.198 11:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.198 11:17:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.198 [2024-11-20 11:17:01.031676] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:07:18.198 [2024-11-20 11:17:01.031900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59191 ] 00:07:18.198 [2024-11-20 11:17:01.211677] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:18.198 [2024-11-20 11:17:01.211861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.457 [2024-11-20 11:17:01.336440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.394 11:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.394 11:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:19.394 11:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59207 00:07:19.394 11:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:19.394 11:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59207 /var/tmp/spdk2.sock 00:07:19.394 11:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59207 ']' 00:07:19.394 11:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.394 11:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.394 11:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.394 11:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.394 11:17:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.394 [2024-11-20 11:17:02.390366] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:07:19.394 [2024-11-20 11:17:02.390604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59207 ] 00:07:19.653 [2024-11-20 11:17:02.575715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.912 [2024-11-20 11:17:02.836756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.449 11:17:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.449 11:17:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:22.449 11:17:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59207 00:07:22.449 11:17:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59207 00:07:22.449 11:17:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:22.449 11:17:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59191 00:07:22.449 11:17:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59191 ']' 00:07:22.449 11:17:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59191 00:07:22.449 11:17:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:22.449 11:17:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.449 11:17:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59191 00:07:22.708 killing process with pid 59191 00:07:22.708 11:17:05 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.708 11:17:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.708 11:17:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59191' 00:07:22.708 11:17:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59191 00:07:22.708 11:17:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59191 00:07:27.978 11:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59207 00:07:27.978 11:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59207 ']' 00:07:27.978 11:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59207 00:07:27.978 11:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:27.978 11:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.978 11:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59207 00:07:28.237 killing process with pid 59207 00:07:28.237 11:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.237 11:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.237 11:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59207' 00:07:28.237 11:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59207 00:07:28.237 11:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59207 00:07:30.813 00:07:30.813 real 0m12.916s 00:07:30.813 user 0m13.370s 00:07:30.813 sys 0m1.255s 00:07:30.813 11:17:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.813 11:17:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.813 ************************************ 00:07:30.813 END TEST locking_app_on_unlocked_coremask 00:07:30.813 ************************************ 00:07:30.813 11:17:13 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:30.813 11:17:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.813 11:17:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.813 11:17:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.813 ************************************ 00:07:30.813 START TEST locking_app_on_locked_coremask 00:07:30.813 ************************************ 00:07:30.813 11:17:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:30.813 11:17:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59370 00:07:30.813 11:17:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:30.813 11:17:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59370 /var/tmp/spdk.sock 00:07:30.813 11:17:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59370 ']' 00:07:30.813 11:17:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.813 11:17:13 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.813 11:17:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.813 11:17:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.813 11:17:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.073 [2024-11-20 11:17:14.003729] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:07:31.073 [2024-11-20 11:17:14.003862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59370 ] 00:07:31.073 [2024-11-20 11:17:14.164329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.332 [2024-11-20 11:17:14.298944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.269 11:17:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.269 11:17:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:32.269 11:17:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59392 00:07:32.269 11:17:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:32.269 11:17:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59392 /var/tmp/spdk2.sock 00:07:32.269 11:17:15 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:07:32.269 11:17:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59392 /var/tmp/spdk2.sock 00:07:32.269 11:17:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:32.269 11:17:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.269 11:17:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:32.269 11:17:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.269 11:17:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59392 /var/tmp/spdk2.sock 00:07:32.269 11:17:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59392 ']' 00:07:32.269 11:17:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:32.269 11:17:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.269 11:17:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:32.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:32.269 11:17:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.269 11:17:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.269 [2024-11-20 11:17:15.364994] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:07:32.269 [2024-11-20 11:17:15.365215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59392 ] 00:07:32.563 [2024-11-20 11:17:15.541448] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59370 has claimed it. 00:07:32.563 [2024-11-20 11:17:15.545564] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:33.148 ERROR: process (pid: 59392) is no longer running 00:07:33.148 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59392) - No such process 00:07:33.148 11:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.148 11:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:33.148 11:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:33.148 11:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:33.148 11:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:33.148 11:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:33.148 11:17:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59370 00:07:33.148 11:17:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59370 00:07:33.148 11:17:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:33.407 11:17:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59370 00:07:33.407 11:17:16 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59370 ']' 00:07:33.407 11:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59370 00:07:33.407 11:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:33.407 11:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.407 11:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59370 00:07:33.407 11:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.407 11:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.407 11:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59370' 00:07:33.407 killing process with pid 59370 00:07:33.407 11:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59370 00:07:33.407 11:17:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59370 00:07:36.706 00:07:36.706 real 0m5.302s 00:07:36.706 user 0m5.518s 00:07:36.706 sys 0m0.831s 00:07:36.706 11:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.706 ************************************ 00:07:36.706 END TEST locking_app_on_locked_coremask 00:07:36.706 ************************************ 00:07:36.706 11:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.706 11:17:19 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:36.706 11:17:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:07:36.706 11:17:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.706 11:17:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.706 ************************************ 00:07:36.706 START TEST locking_overlapped_coremask 00:07:36.706 ************************************ 00:07:36.706 11:17:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:36.706 11:17:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59467 00:07:36.706 11:17:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:36.706 11:17:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59467 /var/tmp/spdk.sock 00:07:36.706 11:17:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59467 ']' 00:07:36.706 11:17:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.706 11:17:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.706 11:17:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.706 11:17:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.706 11:17:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.706 [2024-11-20 11:17:19.377558] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:07:36.706 [2024-11-20 11:17:19.377714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59467 ] 00:07:36.706 [2024-11-20 11:17:19.558371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.706 [2024-11-20 11:17:19.698917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.706 [2024-11-20 11:17:19.698836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.706 [2024-11-20 11:17:19.698942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.676 11:17:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.676 11:17:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:37.676 11:17:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59485 00:07:37.676 11:17:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59485 /var/tmp/spdk2.sock 00:07:37.676 11:17:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:37.676 11:17:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59485 /var/tmp/spdk2.sock 00:07:37.676 11:17:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:37.676 11:17:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:37.676 11:17:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.676 11:17:20 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:37.676 11:17:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.676 11:17:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59485 /var/tmp/spdk2.sock 00:07:37.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:37.676 11:17:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59485 ']' 00:07:37.676 11:17:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:37.676 11:17:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.676 11:17:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:37.676 11:17:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.676 11:17:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:37.936 [2024-11-20 11:17:20.834834] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:07:37.936 [2024-11-20 11:17:20.834996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59485 ] 00:07:37.936 [2024-11-20 11:17:21.023033] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59467 has claimed it. 00:07:37.936 [2024-11-20 11:17:21.023115] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:38.504 ERROR: process (pid: 59485) is no longer running 00:07:38.504 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59485) - No such process 00:07:38.504 11:17:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.504 11:17:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:38.504 11:17:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:38.504 11:17:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:38.504 11:17:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:38.504 11:17:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:38.504 11:17:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:38.504 11:17:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:38.504 11:17:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:38.504 11:17:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:38.504 11:17:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59467 00:07:38.504 11:17:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59467 ']' 00:07:38.504 11:17:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59467 00:07:38.504 11:17:21 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:38.504 11:17:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.504 11:17:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59467 00:07:38.504 11:17:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.504 11:17:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.504 11:17:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59467' 00:07:38.504 killing process with pid 59467 00:07:38.504 11:17:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59467 00:07:38.504 11:17:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59467 00:07:41.803 00:07:41.803 real 0m5.126s 00:07:41.803 user 0m14.037s 00:07:41.803 sys 0m0.624s 00:07:41.803 11:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.803 11:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.803 ************************************ 00:07:41.803 END TEST locking_overlapped_coremask 00:07:41.803 ************************************ 00:07:41.803 11:17:24 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:41.803 11:17:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.803 11:17:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.803 11:17:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:41.803 ************************************ 00:07:41.803 START TEST 
locking_overlapped_coremask_via_rpc 00:07:41.803 ************************************ 00:07:41.803 11:17:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:41.803 11:17:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59560 00:07:41.803 11:17:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59560 /var/tmp/spdk.sock 00:07:41.803 11:17:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:41.803 11:17:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59560 ']' 00:07:41.803 11:17:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.803 11:17:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.803 11:17:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.803 11:17:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.803 11:17:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.803 [2024-11-20 11:17:24.577942] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:07:41.803 [2024-11-20 11:17:24.578195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59560 ] 00:07:41.803 [2024-11-20 11:17:24.760361] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:41.803 [2024-11-20 11:17:24.760549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.803 [2024-11-20 11:17:24.901720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.803 [2024-11-20 11:17:24.901819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.803 [2024-11-20 11:17:24.901850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.181 11:17:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.181 11:17:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:43.181 11:17:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59578 00:07:43.181 11:17:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59578 /var/tmp/spdk2.sock 00:07:43.181 11:17:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:43.181 11:17:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59578 ']' 00:07:43.181 11:17:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:43.181 11:17:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.181 11:17:25 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:43.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:43.181 11:17:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.181 11:17:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.181 [2024-11-20 11:17:25.984524] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:07:43.182 [2024-11-20 11:17:25.984750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59578 ] 00:07:43.182 [2024-11-20 11:17:26.164983] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:43.182 [2024-11-20 11:17:26.165072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:43.440 [2024-11-20 11:17:26.433964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.440 [2024-11-20 11:17:26.437669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.440 [2024-11-20 11:17:26.437704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.976 11:17:28 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.976 [2024-11-20 11:17:28.658699] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59560 has claimed it. 00:07:45.976 request: 00:07:45.976 { 00:07:45.976 "method": "framework_enable_cpumask_locks", 00:07:45.976 "req_id": 1 00:07:45.976 } 00:07:45.976 Got JSON-RPC error response 00:07:45.976 response: 00:07:45.976 { 00:07:45.976 "code": -32603, 00:07:45.976 "message": "Failed to claim CPU core: 2" 00:07:45.976 } 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59560 /var/tmp/spdk.sock 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59560 ']' 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59578 /var/tmp/spdk2.sock 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59578 ']' 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:45.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.976 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.236 11:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.236 11:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:46.236 11:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:46.236 11:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:46.236 11:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:46.236 11:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:46.236 00:07:46.236 real 0m4.730s 00:07:46.236 user 0m1.521s 00:07:46.236 sys 0m0.210s 00:07:46.236 11:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.236 11:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.236 ************************************ 00:07:46.236 END TEST locking_overlapped_coremask_via_rpc 00:07:46.236 ************************************ 00:07:46.236 11:17:29 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:46.236 11:17:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59560 ]] 00:07:46.236 11:17:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59560 00:07:46.236 11:17:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59560 ']' 00:07:46.236 11:17:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59560 00:07:46.236 11:17:29 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:46.236 11:17:29 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.236 11:17:29 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59560 00:07:46.236 killing process with pid 59560 00:07:46.236 11:17:29 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.236 11:17:29 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.236 11:17:29 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59560' 00:07:46.236 11:17:29 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59560 00:07:46.236 11:17:29 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59560 00:07:49.544 11:17:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59578 ]] 00:07:49.544 11:17:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59578 00:07:49.544 11:17:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59578 ']' 00:07:49.544 11:17:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59578 00:07:49.544 11:17:32 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:49.544 11:17:32 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.544 11:17:32 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59578 00:07:49.544 killing process with pid 59578 00:07:49.544 11:17:32 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:49.544 11:17:32 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:49.544 11:17:32 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59578' 00:07:49.544 11:17:32 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59578 00:07:49.544 11:17:32 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59578 00:07:52.082 11:17:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:52.082 11:17:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:52.082 11:17:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59560 ]] 00:07:52.082 11:17:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59560 00:07:52.082 11:17:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59560 ']' 00:07:52.082 11:17:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59560 00:07:52.082 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59560) - No such process 00:07:52.082 Process with pid 59560 is not found 00:07:52.082 11:17:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59560 is not found' 00:07:52.082 11:17:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59578 ]] 00:07:52.082 11:17:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59578 00:07:52.082 11:17:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59578 ']' 00:07:52.082 11:17:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59578 00:07:52.082 Process with pid 59578 is not found 00:07:52.082 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59578) - No such process 00:07:52.082 11:17:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59578 is not found' 00:07:52.082 11:17:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:52.082 ************************************ 00:07:52.082 END TEST cpu_locks 00:07:52.082 ************************************ 00:07:52.082 00:07:52.082 real 0m55.562s 00:07:52.082 user 1m36.447s 00:07:52.082 sys 0m6.860s 00:07:52.082 11:17:35 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:52.082 11:17:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:52.082 ************************************ 00:07:52.082 END TEST event 00:07:52.082 ************************************ 00:07:52.082 00:07:52.082 real 1m28.119s 00:07:52.082 user 2m39.655s 00:07:52.082 sys 0m11.178s 00:07:52.082 11:17:35 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.082 11:17:35 event -- common/autotest_common.sh@10 -- # set +x 00:07:52.082 11:17:35 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:52.082 11:17:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.082 11:17:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.082 11:17:35 -- common/autotest_common.sh@10 -- # set +x 00:07:52.082 ************************************ 00:07:52.082 START TEST thread 00:07:52.082 ************************************ 00:07:52.082 11:17:35 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:52.342 * Looking for test storage... 
00:07:52.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:52.342 11:17:35 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:52.342 11:17:35 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:52.342 11:17:35 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:52.342 11:17:35 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:52.342 11:17:35 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.342 11:17:35 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.342 11:17:35 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.342 11:17:35 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.342 11:17:35 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.342 11:17:35 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.342 11:17:35 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.342 11:17:35 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.342 11:17:35 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.342 11:17:35 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.342 11:17:35 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.342 11:17:35 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:52.342 11:17:35 thread -- scripts/common.sh@345 -- # : 1 00:07:52.342 11:17:35 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.342 11:17:35 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:52.342 11:17:35 thread -- scripts/common.sh@365 -- # decimal 1 00:07:52.342 11:17:35 thread -- scripts/common.sh@353 -- # local d=1 00:07:52.342 11:17:35 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.342 11:17:35 thread -- scripts/common.sh@355 -- # echo 1 00:07:52.342 11:17:35 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.342 11:17:35 thread -- scripts/common.sh@366 -- # decimal 2 00:07:52.342 11:17:35 thread -- scripts/common.sh@353 -- # local d=2 00:07:52.342 11:17:35 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.342 11:17:35 thread -- scripts/common.sh@355 -- # echo 2 00:07:52.342 11:17:35 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.342 11:17:35 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.342 11:17:35 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.342 11:17:35 thread -- scripts/common.sh@368 -- # return 0 00:07:52.342 11:17:35 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.342 11:17:35 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:52.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.342 --rc genhtml_branch_coverage=1 00:07:52.342 --rc genhtml_function_coverage=1 00:07:52.342 --rc genhtml_legend=1 00:07:52.342 --rc geninfo_all_blocks=1 00:07:52.342 --rc geninfo_unexecuted_blocks=1 00:07:52.342 00:07:52.342 ' 00:07:52.342 11:17:35 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:52.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.342 --rc genhtml_branch_coverage=1 00:07:52.342 --rc genhtml_function_coverage=1 00:07:52.342 --rc genhtml_legend=1 00:07:52.342 --rc geninfo_all_blocks=1 00:07:52.342 --rc geninfo_unexecuted_blocks=1 00:07:52.342 00:07:52.342 ' 00:07:52.342 11:17:35 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:52.342 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.342 --rc genhtml_branch_coverage=1 00:07:52.342 --rc genhtml_function_coverage=1 00:07:52.342 --rc genhtml_legend=1 00:07:52.342 --rc geninfo_all_blocks=1 00:07:52.342 --rc geninfo_unexecuted_blocks=1 00:07:52.342 00:07:52.342 ' 00:07:52.342 11:17:35 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:52.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.342 --rc genhtml_branch_coverage=1 00:07:52.342 --rc genhtml_function_coverage=1 00:07:52.342 --rc genhtml_legend=1 00:07:52.342 --rc geninfo_all_blocks=1 00:07:52.342 --rc geninfo_unexecuted_blocks=1 00:07:52.342 00:07:52.342 ' 00:07:52.342 11:17:35 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:52.342 11:17:35 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:52.342 11:17:35 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.342 11:17:35 thread -- common/autotest_common.sh@10 -- # set +x 00:07:52.342 ************************************ 00:07:52.342 START TEST thread_poller_perf 00:07:52.342 ************************************ 00:07:52.342 11:17:35 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:52.601 [2024-11-20 11:17:35.477545] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:07:52.601 [2024-11-20 11:17:35.477761] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59784 ] 00:07:52.601 [2024-11-20 11:17:35.654747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.859 [2024-11-20 11:17:35.785642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.859 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:54.244 [2024-11-20T11:17:37.360Z] ====================================== 00:07:54.244 [2024-11-20T11:17:37.360Z] busy:2302379152 (cyc) 00:07:54.244 [2024-11-20T11:17:37.360Z] total_run_count: 318000 00:07:54.244 [2024-11-20T11:17:37.360Z] tsc_hz: 2290000000 (cyc) 00:07:54.244 [2024-11-20T11:17:37.360Z] ====================================== 00:07:54.244 [2024-11-20T11:17:37.360Z] poller_cost: 7240 (cyc), 3161 (nsec) 00:07:54.244 00:07:54.244 real 0m1.637s 00:07:54.244 user 0m1.427s 00:07:54.244 sys 0m0.097s 00:07:54.244 ************************************ 00:07:54.244 END TEST thread_poller_perf 00:07:54.244 ************************************ 00:07:54.244 11:17:37 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.244 11:17:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:54.244 11:17:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:54.244 11:17:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:54.244 11:17:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.244 11:17:37 thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.244 ************************************ 00:07:54.244 START TEST thread_poller_perf 00:07:54.244 
************************************ 00:07:54.244 11:17:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:54.244 [2024-11-20 11:17:37.171399] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:07:54.244 [2024-11-20 11:17:37.171573] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59826 ] 00:07:54.244 [2024-11-20 11:17:37.351742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.503 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:54.503 [2024-11-20 11:17:37.490294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.883 [2024-11-20T11:17:38.999Z] ====================================== 00:07:55.883 [2024-11-20T11:17:38.999Z] busy:2294387864 (cyc) 00:07:55.883 [2024-11-20T11:17:38.999Z] total_run_count: 4296000 00:07:55.883 [2024-11-20T11:17:38.999Z] tsc_hz: 2290000000 (cyc) 00:07:55.883 [2024-11-20T11:17:38.999Z] ====================================== 00:07:55.883 [2024-11-20T11:17:38.999Z] poller_cost: 534 (cyc), 233 (nsec) 00:07:55.883 00:07:55.883 real 0m1.609s 00:07:55.883 user 0m1.393s 00:07:55.883 sys 0m0.104s 00:07:55.883 ************************************ 00:07:55.883 END TEST thread_poller_perf 00:07:55.883 ************************************ 00:07:55.883 11:17:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.883 11:17:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:55.883 11:17:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:55.883 ************************************ 00:07:55.883 END TEST thread 00:07:55.883 ************************************ 00:07:55.883 
00:07:55.883 real 0m3.593s 00:07:55.883 user 0m2.978s 00:07:55.883 sys 0m0.402s 00:07:55.883 11:17:38 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.883 11:17:38 thread -- common/autotest_common.sh@10 -- # set +x 00:07:55.883 11:17:38 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:55.883 11:17:38 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:55.883 11:17:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.883 11:17:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.883 11:17:38 -- common/autotest_common.sh@10 -- # set +x 00:07:55.883 ************************************ 00:07:55.883 START TEST app_cmdline 00:07:55.883 ************************************ 00:07:55.883 11:17:38 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:55.883 * Looking for test storage... 00:07:55.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:55.883 11:17:38 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:55.883 11:17:38 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:55.883 11:17:38 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:56.143 11:17:39 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.143 11:17:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:56.143 11:17:39 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.143 11:17:39 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:56.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.143 --rc genhtml_branch_coverage=1 00:07:56.143 --rc genhtml_function_coverage=1 00:07:56.143 --rc 
genhtml_legend=1 00:07:56.143 --rc geninfo_all_blocks=1 00:07:56.143 --rc geninfo_unexecuted_blocks=1 00:07:56.143 00:07:56.143 ' 00:07:56.143 11:17:39 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:56.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.143 --rc genhtml_branch_coverage=1 00:07:56.143 --rc genhtml_function_coverage=1 00:07:56.143 --rc genhtml_legend=1 00:07:56.143 --rc geninfo_all_blocks=1 00:07:56.143 --rc geninfo_unexecuted_blocks=1 00:07:56.143 00:07:56.143 ' 00:07:56.143 11:17:39 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:56.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.143 --rc genhtml_branch_coverage=1 00:07:56.143 --rc genhtml_function_coverage=1 00:07:56.143 --rc genhtml_legend=1 00:07:56.143 --rc geninfo_all_blocks=1 00:07:56.143 --rc geninfo_unexecuted_blocks=1 00:07:56.143 00:07:56.143 ' 00:07:56.143 11:17:39 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:56.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.143 --rc genhtml_branch_coverage=1 00:07:56.143 --rc genhtml_function_coverage=1 00:07:56.143 --rc genhtml_legend=1 00:07:56.143 --rc geninfo_all_blocks=1 00:07:56.143 --rc geninfo_unexecuted_blocks=1 00:07:56.143 00:07:56.143 ' 00:07:56.143 11:17:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:56.143 11:17:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59915 00:07:56.143 11:17:39 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:56.143 11:17:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59915 00:07:56.143 11:17:39 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59915 ']' 00:07:56.143 11:17:39 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.143 11:17:39 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:07:56.143 11:17:39 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.143 11:17:39 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.143 11:17:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:56.143 [2024-11-20 11:17:39.204916] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:07:56.143 [2024-11-20 11:17:39.205562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59915 ] 00:07:56.403 [2024-11-20 11:17:39.367961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.403 [2024-11-20 11:17:39.496556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.375 11:17:40 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.375 11:17:40 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:57.375 11:17:40 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:57.645 { 00:07:57.645 "version": "SPDK v25.01-pre git sha1 0383e688b", 00:07:57.645 "fields": { 00:07:57.645 "major": 25, 00:07:57.645 "minor": 1, 00:07:57.645 "patch": 0, 00:07:57.645 "suffix": "-pre", 00:07:57.645 "commit": "0383e688b" 00:07:57.645 } 00:07:57.645 } 00:07:57.645 11:17:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:57.645 11:17:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:57.645 11:17:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:57.645 11:17:40 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:57.645 11:17:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:57.645 11:17:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:57.645 11:17:40 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.645 11:17:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:57.645 11:17:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:57.645 11:17:40 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.645 11:17:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:57.904 11:17:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:57.904 11:17:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:57.904 11:17:40 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:57.904 11:17:40 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:57.904 11:17:40 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:57.904 11:17:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.904 11:17:40 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:57.904 11:17:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.904 11:17:40 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:57.904 11:17:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.904 11:17:40 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:57.904 11:17:40 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:57.904 11:17:40 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:57.904 request: 00:07:57.904 { 00:07:57.904 "method": "env_dpdk_get_mem_stats", 00:07:57.904 "req_id": 1 00:07:57.904 } 00:07:57.904 Got JSON-RPC error response 00:07:57.904 response: 00:07:57.904 { 00:07:57.904 "code": -32601, 00:07:57.904 "message": "Method not found" 00:07:57.904 } 00:07:57.904 11:17:41 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:57.904 11:17:41 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:57.904 11:17:41 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:57.904 11:17:41 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:57.904 11:17:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59915 00:07:57.904 11:17:41 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59915 ']' 00:07:57.904 11:17:41 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59915 00:07:57.904 11:17:41 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:57.904 11:17:41 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:57.904 11:17:41 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59915 00:07:58.162 11:17:41 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.162 11:17:41 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.162 11:17:41 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59915' 00:07:58.162 killing process with pid 59915 00:07:58.162 11:17:41 app_cmdline -- common/autotest_common.sh@973 -- # kill 59915 00:07:58.162 11:17:41 app_cmdline -- common/autotest_common.sh@978 -- # wait 59915 00:08:00.751 00:08:00.751 real 0m4.905s 00:08:00.751 user 0m5.249s 00:08:00.751 sys 0m0.658s 00:08:00.751 11:17:43 app_cmdline -- 
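The cmdline test above starts `spdk_tgt` with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so the subsequent `env_dpdk_get_mem_stats` call is rejected with the standard JSON-RPC "Method not found" error, code -32601. The allowlist behaviour the test exercises can be sketched like this; `ALLOWED` and `dispatch` are illustrative names, not SPDK's dispatcher:

```python
# Sketch of the RPC allowlist behaviour observed in the log above: any method
# outside --rpcs-allowed gets the JSON-RPC 2.0 "Method not found" error
# (code -32601), exactly the response body captured by the test.

ALLOWED = {"spdk_get_version", "rpc_get_methods"}

def dispatch(method: str) -> dict:
    if method not in ALLOWED:
        return {"code": -32601, "message": "Method not found"}
    return {"result": "ok"}  # placeholder for a real handler

print(dispatch("env_dpdk_get_mem_stats"))  # {'code': -32601, 'message': 'Method not found'}
```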
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.751 11:17:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:00.751 ************************************ 00:08:00.751 END TEST app_cmdline 00:08:00.751 ************************************ 00:08:00.751 11:17:43 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:00.751 11:17:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.751 11:17:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.751 11:17:43 -- common/autotest_common.sh@10 -- # set +x 00:08:00.752 ************************************ 00:08:00.752 START TEST version 00:08:00.752 ************************************ 00:08:00.752 11:17:43 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:01.009 * Looking for test storage... 00:08:01.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:01.009 11:17:43 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:01.009 11:17:43 version -- common/autotest_common.sh@1693 -- # lcov --version 00:08:01.009 11:17:43 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:01.009 11:17:44 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:01.009 11:17:44 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.009 11:17:44 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.009 11:17:44 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.009 11:17:44 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.009 11:17:44 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.009 11:17:44 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.009 11:17:44 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.009 11:17:44 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.009 11:17:44 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.009 11:17:44 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:08:01.009 11:17:44 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.009 11:17:44 version -- scripts/common.sh@344 -- # case "$op" in 00:08:01.009 11:17:44 version -- scripts/common.sh@345 -- # : 1 00:08:01.009 11:17:44 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.009 11:17:44 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:01.009 11:17:44 version -- scripts/common.sh@365 -- # decimal 1 00:08:01.009 11:17:44 version -- scripts/common.sh@353 -- # local d=1 00:08:01.009 11:17:44 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.009 11:17:44 version -- scripts/common.sh@355 -- # echo 1 00:08:01.009 11:17:44 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.009 11:17:44 version -- scripts/common.sh@366 -- # decimal 2 00:08:01.009 11:17:44 version -- scripts/common.sh@353 -- # local d=2 00:08:01.009 11:17:44 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.010 11:17:44 version -- scripts/common.sh@355 -- # echo 2 00:08:01.010 11:17:44 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.010 11:17:44 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.010 11:17:44 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.010 11:17:44 version -- scripts/common.sh@368 -- # return 0 00:08:01.010 11:17:44 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.010 11:17:44 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:01.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.010 --rc genhtml_branch_coverage=1 00:08:01.010 --rc genhtml_function_coverage=1 00:08:01.010 --rc genhtml_legend=1 00:08:01.010 --rc geninfo_all_blocks=1 00:08:01.010 --rc geninfo_unexecuted_blocks=1 00:08:01.010 00:08:01.010 ' 00:08:01.010 11:17:44 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:08:01.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.010 --rc genhtml_branch_coverage=1 00:08:01.010 --rc genhtml_function_coverage=1 00:08:01.010 --rc genhtml_legend=1 00:08:01.010 --rc geninfo_all_blocks=1 00:08:01.010 --rc geninfo_unexecuted_blocks=1 00:08:01.010 00:08:01.010 ' 00:08:01.010 11:17:44 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:01.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.010 --rc genhtml_branch_coverage=1 00:08:01.010 --rc genhtml_function_coverage=1 00:08:01.010 --rc genhtml_legend=1 00:08:01.010 --rc geninfo_all_blocks=1 00:08:01.010 --rc geninfo_unexecuted_blocks=1 00:08:01.010 00:08:01.010 ' 00:08:01.010 11:17:44 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:01.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.010 --rc genhtml_branch_coverage=1 00:08:01.010 --rc genhtml_function_coverage=1 00:08:01.010 --rc genhtml_legend=1 00:08:01.010 --rc geninfo_all_blocks=1 00:08:01.010 --rc geninfo_unexecuted_blocks=1 00:08:01.010 00:08:01.010 ' 00:08:01.010 11:17:44 version -- app/version.sh@17 -- # get_header_version major 00:08:01.010 11:17:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:01.010 11:17:44 version -- app/version.sh@14 -- # tr -d '"' 00:08:01.010 11:17:44 version -- app/version.sh@14 -- # cut -f2 00:08:01.010 11:17:44 version -- app/version.sh@17 -- # major=25 00:08:01.010 11:17:44 version -- app/version.sh@18 -- # get_header_version minor 00:08:01.010 11:17:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:01.010 11:17:44 version -- app/version.sh@14 -- # cut -f2 00:08:01.010 11:17:44 version -- app/version.sh@14 -- # tr -d '"' 00:08:01.010 11:17:44 version -- app/version.sh@18 -- # minor=1 00:08:01.010 11:17:44 
version -- app/version.sh@19 -- # get_header_version patch 00:08:01.010 11:17:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:01.010 11:17:44 version -- app/version.sh@14 -- # cut -f2 00:08:01.010 11:17:44 version -- app/version.sh@14 -- # tr -d '"' 00:08:01.010 11:17:44 version -- app/version.sh@19 -- # patch=0 00:08:01.010 11:17:44 version -- app/version.sh@20 -- # get_header_version suffix 00:08:01.010 11:17:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:01.010 11:17:44 version -- app/version.sh@14 -- # cut -f2 00:08:01.010 11:17:44 version -- app/version.sh@14 -- # tr -d '"' 00:08:01.010 11:17:44 version -- app/version.sh@20 -- # suffix=-pre 00:08:01.010 11:17:44 version -- app/version.sh@22 -- # version=25.1 00:08:01.010 11:17:44 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:01.010 11:17:44 version -- app/version.sh@28 -- # version=25.1rc0 00:08:01.010 11:17:44 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:01.010 11:17:44 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:01.270 11:17:44 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:01.270 11:17:44 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:01.270 ************************************ 00:08:01.270 END TEST version 00:08:01.270 ************************************ 00:08:01.270 00:08:01.270 real 0m0.318s 00:08:01.270 user 0m0.198s 00:08:01.270 sys 0m0.179s 00:08:01.270 11:17:44 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.270 11:17:44 version -- common/autotest_common.sh@10 -- # set +x 00:08:01.270 
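The version.sh trace above extracts `major=25`, `minor=1`, `patch=0`, and `suffix=-pre` from `version.h`, builds `version=25.1`, and, after the `(( patch != 0 ))` check, arrives at `25.1rc0`, which it compares against the Python package's `py_version`. That assembly logic can be sketched as follows; this is assumed from the trace, not the script itself:

```python
# Sketch of the version-string assembly visible in the version.sh trace:
# patch is appended only when nonzero, and a "-pre" header suffix maps to
# an "rc0" tag on the Python side (so 25/1/0/-pre compares equal to 25.1rc0).

def spdk_version(major: int, minor: int, patch: int, suffix: str) -> str:
    version = f"{major}.{minor}"
    if patch != 0:
        version = f"{version}.{patch}"
    if suffix == "-pre":
        version = f"{version}rc0"
    return version

print(spdk_version(25, 1, 0, "-pre"))  # 25.1rc0, matching py_version in the log
```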
11:17:44 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:01.270 11:17:44 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:08:01.270 11:17:44 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:01.270 11:17:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.270 11:17:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.270 11:17:44 -- common/autotest_common.sh@10 -- # set +x 00:08:01.270 ************************************ 00:08:01.270 START TEST bdev_raid 00:08:01.270 ************************************ 00:08:01.270 11:17:44 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:01.270 * Looking for test storage... 00:08:01.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:01.270 11:17:44 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:01.270 11:17:44 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:08:01.270 11:17:44 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:01.528 11:17:44 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@345 -- # : 1 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.529 11:17:44 bdev_raid -- scripts/common.sh@368 -- # return 0 00:08:01.529 11:17:44 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.529 11:17:44 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:01.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.529 --rc genhtml_branch_coverage=1 00:08:01.529 --rc genhtml_function_coverage=1 00:08:01.529 --rc genhtml_legend=1 00:08:01.529 --rc geninfo_all_blocks=1 00:08:01.529 --rc geninfo_unexecuted_blocks=1 00:08:01.529 00:08:01.529 ' 00:08:01.529 11:17:44 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:01.529 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:01.529 --rc genhtml_branch_coverage=1 00:08:01.529 --rc genhtml_function_coverage=1 00:08:01.529 --rc genhtml_legend=1 00:08:01.529 --rc geninfo_all_blocks=1 00:08:01.529 --rc geninfo_unexecuted_blocks=1 00:08:01.529 00:08:01.529 ' 00:08:01.529 11:17:44 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:01.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.529 --rc genhtml_branch_coverage=1 00:08:01.529 --rc genhtml_function_coverage=1 00:08:01.529 --rc genhtml_legend=1 00:08:01.529 --rc geninfo_all_blocks=1 00:08:01.529 --rc geninfo_unexecuted_blocks=1 00:08:01.529 00:08:01.529 ' 00:08:01.529 11:17:44 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:01.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.529 --rc genhtml_branch_coverage=1 00:08:01.529 --rc genhtml_function_coverage=1 00:08:01.529 --rc genhtml_legend=1 00:08:01.529 --rc geninfo_all_blocks=1 00:08:01.529 --rc geninfo_unexecuted_blocks=1 00:08:01.529 00:08:01.529 ' 00:08:01.529 11:17:44 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:01.529 11:17:44 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:08:01.529 11:17:44 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:08:01.529 11:17:44 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:08:01.529 11:17:44 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:08:01.529 11:17:44 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:08:01.529 11:17:44 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:08:01.529 11:17:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.529 11:17:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.529 11:17:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:01.529 ************************************ 
00:08:01.529 START TEST raid1_resize_data_offset_test 00:08:01.529 ************************************ 00:08:01.529 Process raid pid: 60108 00:08:01.529 11:17:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:08:01.529 11:17:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60108 00:08:01.529 11:17:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:01.529 11:17:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60108' 00:08:01.529 11:17:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60108 00:08:01.529 11:17:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60108 ']' 00:08:01.529 11:17:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.529 11:17:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.529 11:17:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.529 11:17:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.529 11:17:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.529 [2024-11-20 11:17:44.571375] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:08:01.529 [2024-11-20 11:17:44.571647] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.788 [2024-11-20 11:17:44.752015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.788 [2024-11-20 11:17:44.883170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.064 [2024-11-20 11:17:45.120173] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.064 [2024-11-20 11:17:45.120314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.632 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.632 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:08:02.632 11:17:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:08:02.632 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.632 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.632 malloc0 00:08:02.632 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.632 11:17:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:08:02.632 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.632 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.632 malloc1 00:08:02.632 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.632 11:17:45 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:08:02.632 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.632 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.632 null0 00:08:02.632 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.632 11:17:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:08:02.633 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.633 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.633 [2024-11-20 11:17:45.661909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:08:02.633 [2024-11-20 11:17:45.663979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:02.633 [2024-11-20 11:17:45.664104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:08:02.633 [2024-11-20 11:17:45.664283] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:02.633 [2024-11-20 11:17:45.664301] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:08:02.633 [2024-11-20 11:17:45.664631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:02.633 [2024-11-20 11:17:45.664851] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:02.633 [2024-11-20 11:17:45.664867] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:02.633 [2024-11-20 11:17:45.665062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:08:02.633 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.633 11:17:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.633 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.633 11:17:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:02.633 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.633 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.633 11:17:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:08:02.633 11:17:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:08:02.633 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.633 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.633 [2024-11-20 11:17:45.725875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:08:02.633 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.633 11:17:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:08:02.633 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.633 11:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.263 malloc2 00:08:03.263 11:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.263 11:17:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:08:03.263 11:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.263 11:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.263 [2024-11-20 11:17:46.336000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:03.263 [2024-11-20 11:17:46.354916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:03.263 11:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.263 [2024-11-20 11:17:46.357085] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:08:03.263 11:17:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.263 11:17:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:03.263 11:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.263 11:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.523 11:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.523 11:17:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:08:03.523 11:17:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60108 00:08:03.523 11:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60108 ']' 00:08:03.523 11:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60108 00:08:03.523 11:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:08:03.523 11:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:08:03.523 11:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60108 00:08:03.523 killing process with pid 60108 00:08:03.523 11:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:03.523 11:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:03.523 11:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60108' 00:08:03.523 11:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60108 00:08:03.523 11:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60108 00:08:03.523 [2024-11-20 11:17:46.454086] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:03.523 [2024-11-20 11:17:46.455953] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:08:03.523 [2024-11-20 11:17:46.456102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.523 [2024-11-20 11:17:46.456126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:08:03.523 [2024-11-20 11:17:46.497638] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.523 [2024-11-20 11:17:46.497969] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:03.523 [2024-11-20 11:17:46.497986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:05.498 [2024-11-20 11:17:48.604162] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.876 11:17:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:08:06.876 00:08:06.876 real 0m5.438s 00:08:06.876 user 0m5.403s 00:08:06.876 sys 0m0.549s 00:08:06.876 11:17:49 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.876 ************************************ 00:08:06.876 END TEST raid1_resize_data_offset_test 00:08:06.876 ************************************ 00:08:06.876 11:17:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.876 11:17:49 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:08:06.876 11:17:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:06.876 11:17:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.876 11:17:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.876 ************************************ 00:08:06.876 START TEST raid0_resize_superblock_test 00:08:06.876 ************************************ 00:08:06.876 11:17:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:08:06.876 11:17:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:08:06.876 11:17:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60199 00:08:06.876 Process raid pid: 60199 00:08:06.876 11:17:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60199' 00:08:06.876 11:17:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:06.876 11:17:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60199 00:08:06.876 11:17:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60199 ']' 00:08:06.876 11:17:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.876 11:17:49 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.876 11:17:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.876 11:17:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.876 11:17:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.134 [2024-11-20 11:17:50.080656] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:08:07.134 [2024-11-20 11:17:50.080909] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.392 [2024-11-20 11:17:50.258498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.392 [2024-11-20 11:17:50.392014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.651 [2024-11-20 11:17:50.625469] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.651 [2024-11-20 11:17:50.625604] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.260 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.260 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:08.260 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:08:08.260 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.260 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:08.828 malloc0 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.829 [2024-11-20 11:17:51.709887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:08.829 [2024-11-20 11:17:51.709966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.829 [2024-11-20 11:17:51.709990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:08.829 [2024-11-20 11:17:51.710003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.829 [2024-11-20 11:17:51.712404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.829 [2024-11-20 11:17:51.712565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:08.829 pt0 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.829 a6ad472d-f005-4e24-8efe-7cdb2e5840c8 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.829 9ee885a1-9afc-4471-8474-35deef7dda6f 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.829 d6db50b3-2208-45b9-97f3-46f5e129888f 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.829 [2024-11-20 11:17:51.843859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9ee885a1-9afc-4471-8474-35deef7dda6f is claimed 00:08:08.829 [2024-11-20 11:17:51.844105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d6db50b3-2208-45b9-97f3-46f5e129888f is claimed 00:08:08.829 [2024-11-20 11:17:51.844296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:08.829 [2024-11-20 11:17:51.844317] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:08:08.829 [2024-11-20 11:17:51.844708] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:08.829 [2024-11-20 11:17:51.844941] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:08.829 [2024-11-20 11:17:51.844963] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:08.829 [2024-11-20 11:17:51.845174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:08.829 11:17:51 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.829 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.090 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:09.090 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:08:09.090 [2024-11-20 11:17:51.943967] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.090 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.090 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:09.090 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:09.090 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:08:09.090 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:09.090 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.090 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.090 [2024-11-20 11:17:51.995991] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:09.090 [2024-11-20 11:17:51.996037] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '9ee885a1-9afc-4471-8474-35deef7dda6f' was resized: old size 131072, new size 204800 00:08:09.090 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:09.090 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:09.090 11:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.090 [2024-11-20 11:17:52.003835] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:09.090 [2024-11-20 11:17:52.003864] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'd6db50b3-2208-45b9-97f3-46f5e129888f' was resized: old size 131072, new size 204800 00:08:09.090 [2024-11-20 11:17:52.003899] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.090 11:17:52 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.090 [2024-11-20 11:17:52.123929] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.090 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.090 [2024-11-20 11:17:52.151758] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:08:09.090 [2024-11-20 11:17:52.151972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:09.090 [2024-11-20 11:17:52.152007] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:09.090 [2024-11-20 11:17:52.152034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:09.090 [2024-11-20 11:17:52.152244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.091 [2024-11-20 11:17:52.152290] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.091 [2024-11-20 11:17:52.152305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:09.091 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.091 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:09.091 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.091 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.091 [2024-11-20 11:17:52.163591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:09.091 [2024-11-20 11:17:52.163672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.091 [2024-11-20 11:17:52.163698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:09.091 [2024-11-20 11:17:52.163711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.091 [2024-11-20 11:17:52.166359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.091 [2024-11-20 11:17:52.166466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:08:09.091 pt0 00:08:09.091 [2024-11-20 11:17:52.168747] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9ee885a1-9afc-4471-8474-35deef7dda6f 00:08:09.091 [2024-11-20 11:17:52.168811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9ee885a1-9afc-4471-8474-35deef7dda6f is claimed 00:08:09.091 [2024-11-20 11:17:52.168957] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev d6db50b3-2208-45b9-97f3-46f5e129888f 00:08:09.091 [2024-11-20 11:17:52.168982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d6db50b3-2208-45b9-97f3-46f5e129888f is claimed 00:08:09.091 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.091 [2024-11-20 11:17:52.169120] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev d6db50b3-2208-45b9-97f3-46f5e129888f (2) smaller than existing raid bdev Raid (3) 00:08:09.091 [2024-11-20 11:17:52.169146] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 9ee885a1-9afc-4471-8474-35deef7dda6f: File exists 00:08:09.091 [2024-11-20 11:17:52.169194] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:09.091 [2024-11-20 11:17:52.169208] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:08:09.091 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:09.091 [2024-11-20 11:17:52.169501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:09.091 [2024-11-20 11:17:52.169680] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:09.091 [2024-11-20 11:17:52.169692] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:09.091 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:08:09.091 [2024-11-20 11:17:52.169878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.091 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.091 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.091 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:09.091 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:09.091 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:09.091 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:08:09.091 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.091 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.091 [2024-11-20 11:17:52.191979] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.350 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.350 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:09.350 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:09.350 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:08:09.350 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60199 00:08:09.350 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60199 ']' 00:08:09.350 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60199 00:08:09.350 11:17:52 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:08:09.350 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.350 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60199 00:08:09.350 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.350 killing process with pid 60199 00:08:09.350 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.350 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60199' 00:08:09.350 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60199 00:08:09.350 [2024-11-20 11:17:52.263921] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.350 [2024-11-20 11:17:52.264022] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.350 11:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60199 00:08:09.350 [2024-11-20 11:17:52.264079] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.350 [2024-11-20 11:17:52.264090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:10.725 [2024-11-20 11:17:53.784246] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:12.099 11:17:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:12.099 00:08:12.099 real 0m5.005s 00:08:12.099 user 0m5.350s 00:08:12.099 sys 0m0.597s 00:08:12.099 ************************************ 00:08:12.099 END TEST raid0_resize_superblock_test 00:08:12.099 ************************************ 00:08:12.099 11:17:54 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.099 11:17:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.099 11:17:55 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:08:12.099 11:17:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:12.099 11:17:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.099 11:17:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:12.099 ************************************ 00:08:12.099 START TEST raid1_resize_superblock_test 00:08:12.099 ************************************ 00:08:12.099 11:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:08:12.099 11:17:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:08:12.099 11:17:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60296 00:08:12.099 Process raid pid: 60296 00:08:12.099 11:17:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:12.099 11:17:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60296' 00:08:12.099 11:17:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60296 00:08:12.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:12.099 11:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60296 ']' 00:08:12.099 11:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.099 11:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.099 11:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.099 11:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.100 11:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.100 [2024-11-20 11:17:55.151730] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:08:12.100 [2024-11-20 11:17:55.151934] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.357 [2024-11-20 11:17:55.335002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.357 [2024-11-20 11:17:55.462749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.616 [2024-11-20 11:17:55.683752] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.616 [2024-11-20 11:17:55.683896] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.182 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.182 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:13.182 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:08:13.182 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.182 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.749 malloc0 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.749 [2024-11-20 11:17:56.616390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:13.749 [2024-11-20 11:17:56.616549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.749 [2024-11-20 11:17:56.616607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:13.749 [2024-11-20 11:17:56.616651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.749 [2024-11-20 11:17:56.619087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.749 [2024-11-20 11:17:56.619185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:13.749 pt0 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.749 0d94c7b9-4a96-479c-98b7-c6f7e66a45ee 00:08:13.749 11:17:56 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.749 d710ef84-d01f-4b20-b586-44ba03f3c3ca 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.749 1b92f7a2-cbcd-49e3-84b9-d52c9dd5999a 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.749 [2024-11-20 11:17:56.749959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d710ef84-d01f-4b20-b586-44ba03f3c3ca is claimed 00:08:13.749 [2024-11-20 11:17:56.750074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1b92f7a2-cbcd-49e3-84b9-d52c9dd5999a is claimed 00:08:13.749 [2024-11-20 11:17:56.750240] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:13.749 [2024-11-20 11:17:56.750259] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:08:13.749 [2024-11-20 11:17:56.750588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:13.749 [2024-11-20 11:17:56.750839] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:13.749 [2024-11-20 11:17:56.750859] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:13.749 [2024-11-20 11:17:56.751053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.749 11:17:56 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.749 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.008 [2024-11-20 11:17:56.866019] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.008 [2024-11-20 11:17:56.913916] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:14.008 [2024-11-20 11:17:56.913950] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev 'd710ef84-d01f-4b20-b586-44ba03f3c3ca' was resized: old size 131072, new size 204800 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.008 [2024-11-20 11:17:56.921820] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:14.008 [2024-11-20 11:17:56.921850] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '1b92f7a2-cbcd-49e3-84b9-d52c9dd5999a' was resized: old size 131072, new size 204800 00:08:14.008 [2024-11-20 11:17:56.921881] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:14.008 11:17:56 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.008 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.008 [2024-11-20 11:17:57.037781] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.008 [2024-11-20 11:17:57.085442] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:08:14.008 [2024-11-20 11:17:57.085607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:14.008 [2024-11-20 11:17:57.085660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:14.008 [2024-11-20 11:17:57.085882] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:14.008 [2024-11-20 11:17:57.086179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.008 [2024-11-20 11:17:57.086304] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.008 [2024-11-20 11:17:57.086367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.008 [2024-11-20 11:17:57.097308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:14.008 [2024-11-20 11:17:57.097442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.008 [2024-11-20 11:17:57.097502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:14.008 [2024-11-20 11:17:57.097544] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:08:14.008 [2024-11-20 11:17:57.100030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.008 [2024-11-20 11:17:57.100136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:14.008 [2024-11-20 11:17:57.102173] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev d710ef84-d01f-4b20-b586-44ba03f3c3ca 00:08:14.008 [2024-11-20 11:17:57.102318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d710ef84-d01f-4b20-b586-44ba03f3c3ca is claimed 00:08:14.008 pt0 00:08:14.008 [2024-11-20 11:17:57.102526] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 1b92f7a2-cbcd-49e3-84b9-d52c9dd5999a 00:08:14.008 [2024-11-20 11:17:57.102554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1b92f7a2-cbcd-49e3-84b9-d52c9dd5999a is claimed 00:08:14.008 [2024-11-20 11:17:57.102697] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 1b92f7a2-cbcd-49e3-84b9-d52c9dd5999a (2) smaller than existing raid bdev Raid (3) 00:08:14.008 [2024-11-20 11:17:57.102778] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev d710ef84-d01f-4b20-b586-44ba03f3c3ca: File exists 00:08:14.008 [2024-11-20 11:17:57.102863] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:14.008 [2024-11-20 11:17:57.102902] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:14.008 [2024-11-20 11:17:57.103182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.008 [2024-11-20 11:17:57.103401] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:14.008 [2024-11-20 11:17:57.103411] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name Raid, raid_bdev 0x617000007b00 00:08:14.008 [2024-11-20 11:17:57.103636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.008 11:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:14.009 11:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:08:14.266 [2024-11-20 11:17:57.125615] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.266 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.266 11:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:14.266 11:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:14.266 11:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:08:14.266 11:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60296 00:08:14.266 11:17:57 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60296 ']' 00:08:14.266 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60296 00:08:14.266 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:14.266 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.266 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60296 00:08:14.266 killing process with pid 60296 00:08:14.266 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.266 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.266 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60296' 00:08:14.266 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60296 00:08:14.266 [2024-11-20 11:17:57.194918] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:14.266 11:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60296 00:08:14.266 [2024-11-20 11:17:57.195016] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.266 [2024-11-20 11:17:57.195080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.266 [2024-11-20 11:17:57.195168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:15.678 [2024-11-20 11:17:58.749141] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.056 11:17:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:17.056 00:08:17.056 real 0m4.898s 00:08:17.056 user 
0m5.166s 00:08:17.056 sys 0m0.567s 00:08:17.056 11:17:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.056 11:17:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.056 ************************************ 00:08:17.056 END TEST raid1_resize_superblock_test 00:08:17.056 ************************************ 00:08:17.056 11:18:00 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:08:17.056 11:18:00 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:08:17.056 11:18:00 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:08:17.056 11:18:00 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:08:17.056 11:18:00 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:08:17.056 11:18:00 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:08:17.056 11:18:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:17.056 11:18:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.056 11:18:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:17.056 ************************************ 00:08:17.056 START TEST raid_function_test_raid0 00:08:17.056 ************************************ 00:08:17.056 11:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:08:17.056 11:18:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:08:17.056 11:18:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:17.056 11:18:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:17.056 11:18:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60404 00:08:17.056 11:18:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L 
bdev_raid 00:08:17.056 Process raid pid: 60404 00:08:17.056 11:18:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60404' 00:08:17.056 11:18:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60404 00:08:17.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.056 11:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60404 ']' 00:08:17.056 11:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.056 11:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.056 11:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.056 11:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.056 11:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:17.056 [2024-11-20 11:18:00.141052] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:08:17.056 [2024-11-20 11:18:00.141671] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.316 [2024-11-20 11:18:00.316900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.576 [2024-11-20 11:18:00.451469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.836 [2024-11-20 11:18:00.700100] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.836 [2024-11-20 11:18:00.700243] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.094 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.094 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:08:18.094 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:18.094 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.094 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:18.094 Base_1 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:18.095 Base_2 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:18.095 [2024-11-20 11:18:01.130809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:18.095 [2024-11-20 11:18:01.133021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:18.095 [2024-11-20 11:18:01.133163] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:18.095 [2024-11-20 11:18:01.133212] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:18.095 [2024-11-20 11:18:01.133589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:18.095 [2024-11-20 11:18:01.133807] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:18.095 [2024-11-20 11:18:01.133855] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:18.095 [2024-11-20 11:18:01.134102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:18.095 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:18.369 [2024-11-20 11:18:01.430423] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:18.369 /dev/nbd0 00:08:18.369 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:18.369 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:18.369 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:18.369 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:08:18.369 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:18.369 
11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:18.369 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:18.369 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:08:18.369 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:18.369 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:18.369 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:18.369 1+0 records in 00:08:18.369 1+0 records out 00:08:18.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300299 s, 13.6 MB/s 00:08:18.369 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.629 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:08:18.629 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.629 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:18.629 11:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:08:18.629 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:18.629 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:18.629 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:18.629 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:18.629 11:18:01 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:18.629 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:18.629 { 00:08:18.629 "nbd_device": "/dev/nbd0", 00:08:18.629 "bdev_name": "raid" 00:08:18.629 } 00:08:18.629 ]' 00:08:18.629 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:18.629 { 00:08:18.629 "nbd_device": "/dev/nbd0", 00:08:18.629 "bdev_name": "raid" 00:08:18.629 } 00:08:18.629 ]' 00:08:18.629 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:18.889 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:18.889 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:18.889 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:18.889 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:08:18.889 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:08:18.889 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:08:18.889 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:18.889 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:18.889 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:18.889 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:18.889 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:18.889 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:18.889 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:08:18.889 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:18.889 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:18.889 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:18.889 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:18.889 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:18.889 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:18.889 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:18.889 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:18.890 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:18.890 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:18.890 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:18.890 4096+0 records in 00:08:18.890 4096+0 records out 00:08:18.890 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0373713 s, 56.1 MB/s 00:08:18.890 11:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:19.149 4096+0 records in 00:08:19.149 4096+0 records out 00:08:19.149 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.245767 s, 8.5 MB/s 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:19.149 128+0 records in 00:08:19.149 128+0 records out 00:08:19.149 65536 bytes (66 kB, 64 KiB) copied, 0.00123121 s, 53.2 MB/s 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:19.149 2035+0 records in 00:08:19.149 2035+0 records out 00:08:19.149 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0145907 s, 71.4 MB/s 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:19.149 11:18:02 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:19.149 456+0 records in 00:08:19.149 456+0 records out 00:08:19.149 233472 bytes (233 kB, 228 KiB) copied, 0.00405996 s, 57.5 MB/s 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:19.149 11:18:02 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:19.149 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:19.407 [2024-11-20 11:18:02.513735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.407 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:19.665 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:19.665 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:19.665 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:19.665 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:19.665 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:19.665 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:08:19.665 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:08:19.665 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:19.665 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:19.665 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:19.665 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:19.665 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:19.665 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:08:19.925 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:19.925 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:19.925 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:19.925 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:08:19.925 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:08:19.925 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:19.925 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:08:19.925 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:19.925 11:18:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60404 00:08:19.925 11:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60404 ']' 00:08:19.925 11:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60404 00:08:19.925 11:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:08:19.925 11:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.925 11:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60404 00:08:19.925 killing process with pid 60404 00:08:19.925 11:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:19.925 11:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:19.925 11:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60404' 00:08:19.925 11:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60404 
00:08:19.925 [2024-11-20 11:18:02.871239] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:19.925 [2024-11-20 11:18:02.871370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.925 11:18:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60404 00:08:19.925 [2024-11-20 11:18:02.871423] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:19.925 [2024-11-20 11:18:02.871440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:20.184 [2024-11-20 11:18:03.098255] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.567 11:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:08:21.567 00:08:21.567 real 0m4.308s 00:08:21.567 user 0m5.118s 00:08:21.567 sys 0m1.031s 00:08:21.567 11:18:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.567 11:18:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:21.567 ************************************ 00:08:21.567 END TEST raid_function_test_raid0 00:08:21.567 ************************************ 00:08:21.567 11:18:04 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:08:21.567 11:18:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:21.567 11:18:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.567 11:18:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.567 ************************************ 00:08:21.567 START TEST raid_function_test_concat 00:08:21.567 ************************************ 00:08:21.567 11:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:08:21.567 11:18:04 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:08:21.567 11:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:21.567 11:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:21.567 11:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60539 00:08:21.567 Process raid pid: 60539 00:08:21.567 11:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60539' 00:08:21.567 11:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:21.567 11:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60539 00:08:21.568 11:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60539 ']' 00:08:21.568 11:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.568 11:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.568 11:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.568 11:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.568 11:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:21.568 [2024-11-20 11:18:04.506317] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:08:21.568 [2024-11-20 11:18:04.506473] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.829 [2024-11-20 11:18:04.688881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.829 [2024-11-20 11:18:04.823386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.088 [2024-11-20 11:18:05.040579] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.088 [2024-11-20 11:18:05.040630] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.347 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.347 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:08:22.347 11:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:22.347 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.347 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:22.347 Base_1 00:08:22.347 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.347 11:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:22.347 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.347 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:22.607 Base_2 00:08:22.607 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.607 11:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:08:22.607 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.607 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:22.607 [2024-11-20 11:18:05.491975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:22.607 [2024-11-20 11:18:05.493838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:22.607 [2024-11-20 11:18:05.493915] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:22.607 [2024-11-20 11:18:05.493927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:22.607 [2024-11-20 11:18:05.494188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:22.607 [2024-11-20 11:18:05.494349] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:22.607 [2024-11-20 11:18:05.494360] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:22.607 [2024-11-20 11:18:05.494541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.607 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.607 11:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:22.607 11:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:22.607 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.607 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:22.607 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.607 11:18:05 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:22.607 11:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:22.607 11:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:22.607 11:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:22.607 11:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:22.607 11:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:22.607 11:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:22.607 11:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:22.607 11:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:08:22.607 11:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:22.607 11:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:22.607 11:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:22.865 [2024-11-20 11:18:05.755705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:22.865 /dev/nbd0 00:08:22.865 11:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:22.865 11:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:22.865 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:22.865 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:08:22.865 11:18:05 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:22.865 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:22.865 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:22.865 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:08:22.865 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:22.865 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:22.865 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:22.865 1+0 records in 00:08:22.865 1+0 records out 00:08:22.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429035 s, 9.5 MB/s 00:08:22.865 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.866 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:08:22.866 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.866 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:22.866 11:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:08:22.866 11:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:22.866 11:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:22.866 11:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:22.866 11:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:08:22.866 11:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:23.124 { 00:08:23.124 "nbd_device": "/dev/nbd0", 00:08:23.124 "bdev_name": "raid" 00:08:23.124 } 00:08:23.124 ]' 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:23.124 { 00:08:23.124 "nbd_device": "/dev/nbd0", 00:08:23.124 "bdev_name": "raid" 00:08:23.124 } 00:08:23.124 ]' 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:23.124 11:18:06 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:23.124 4096+0 records in 00:08:23.124 4096+0 records out 00:08:23.124 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0324597 s, 64.6 MB/s 00:08:23.124 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:23.383 4096+0 records in 00:08:23.383 4096+0 records out 00:08:23.383 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.241709 s, 8.7 MB/s 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:23.383 128+0 records in 00:08:23.383 128+0 records out 00:08:23.383 65536 bytes (66 kB, 64 KiB) copied, 0.00113178 s, 57.9 MB/s 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:23.383 2035+0 records in 00:08:23.383 2035+0 records out 00:08:23.383 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0142206 s, 73.3 MB/s 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:23.383 456+0 records in 00:08:23.383 456+0 records out 00:08:23.383 233472 bytes (233 kB, 228 KiB) copied, 0.00362654 s, 64.4 MB/s 00:08:23.383 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:23.642 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:23.642 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:23.642 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:23.642 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:23.642 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:08:23.642 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:23.642 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:23.642 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:23.642 11:18:06 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:23.642 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:08:23.642 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:23.642 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:23.901 [2024-11-20 11:18:06.770137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.901 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:23.901 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:23.901 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:23.901 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:23.901 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:23.901 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:23.901 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:08:23.901 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:08:23.901 11:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:23.901 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:23.901 11:18:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:24.160 11:18:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:24.160 11:18:07 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:08:24.160 11:18:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:24.160 11:18:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:24.160 11:18:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:24.160 11:18:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:24.160 11:18:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:08:24.160 11:18:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:08:24.160 11:18:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:24.160 11:18:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:08:24.160 11:18:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:24.160 11:18:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60539 00:08:24.160 11:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60539 ']' 00:08:24.160 11:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60539 00:08:24.160 11:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:08:24.161 11:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.161 11:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60539 00:08:24.161 11:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.161 11:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.161 killing process with pid 60539 00:08:24.161 11:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 60539' 00:08:24.161 11:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60539 00:08:24.161 [2024-11-20 11:18:07.142180] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.161 [2024-11-20 11:18:07.142300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.161 11:18:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60539 00:08:24.161 [2024-11-20 11:18:07.142367] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.161 [2024-11-20 11:18:07.142381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:24.419 [2024-11-20 11:18:07.387238] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:25.796 11:18:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:08:25.796 00:08:25.796 real 0m4.244s 00:08:25.796 user 0m4.972s 00:08:25.796 sys 0m1.002s 00:08:25.796 11:18:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.796 11:18:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:25.796 ************************************ 00:08:25.796 END TEST raid_function_test_concat 00:08:25.796 ************************************ 00:08:25.796 11:18:08 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:08:25.796 11:18:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:25.796 11:18:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.796 11:18:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:25.796 ************************************ 00:08:25.796 START TEST raid0_resize_test 00:08:25.796 ************************************ 00:08:25.796 11:18:08 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@1129 -- # raid_resize_test 0 00:08:25.796 11:18:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:08:25.796 11:18:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:25.796 11:18:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:25.796 11:18:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:25.796 11:18:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:25.796 11:18:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:25.796 11:18:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:25.796 11:18:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:25.796 11:18:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60668 00:08:25.796 Process raid pid: 60668 00:08:25.796 11:18:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60668' 00:08:25.796 11:18:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60668 00:08:25.796 11:18:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60668 ']' 00:08:25.796 11:18:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.796 11:18:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.796 11:18:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:25.796 11:18:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.796 11:18:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.796 11:18:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:25.796 [2024-11-20 11:18:08.812772] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:08:25.796 [2024-11-20 11:18:08.812907] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.055 [2024-11-20 11:18:08.977319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.055 [2024-11-20 11:18:09.110627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.314 [2024-11-20 11:18:09.353645] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.314 [2024-11-20 11:18:09.353699] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.882 Base_1 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:26.882 11:18:09 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.882 Base_2 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.882 [2024-11-20 11:18:09.762284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:26.882 [2024-11-20 11:18:09.764438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:26.882 [2024-11-20 11:18:09.764564] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:26.882 [2024-11-20 11:18:09.764582] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:26.882 [2024-11-20 11:18:09.764900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:26.882 [2024-11-20 11:18:09.765059] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:26.882 [2024-11-20 11:18:09.765076] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:26.882 [2024-11-20 11:18:09.765278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:26.882 11:18:09 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.882 [2024-11-20 11:18:09.774244] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:26.882 [2024-11-20 11:18:09.774284] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:26.882 true 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.882 [2024-11-20 11:18:09.786414] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:26.882 [2024-11-20 11:18:09.834134] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:26.882 [2024-11-20 11:18:09.834173] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:26.882 [2024-11-20 11:18:09.834208] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:08:26.882 true 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:26.882 [2024-11-20 11:18:09.846341] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60668 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60668 ']' 00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60668 
00:08:26.882 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:08:26.883 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.883 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60668 00:08:26.883 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.883 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.883 killing process with pid 60668 00:08:26.883 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60668' 00:08:26.883 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60668 00:08:26.883 [2024-11-20 11:18:09.916358] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:26.883 [2024-11-20 11:18:09.916489] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.883 11:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60668 00:08:26.883 [2024-11-20 11:18:09.916549] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:26.883 [2024-11-20 11:18:09.916561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:26.883 [2024-11-20 11:18:09.936992] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:28.262 11:18:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:28.262 00:08:28.262 real 0m2.403s 00:08:28.262 user 0m2.574s 00:08:28.262 sys 0m0.370s 00:08:28.262 11:18:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.262 11:18:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.262 ************************************ 00:08:28.262 END TEST 
raid0_resize_test 00:08:28.262 ************************************ 00:08:28.262 11:18:11 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:08:28.262 11:18:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:28.262 11:18:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.262 11:18:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:28.262 ************************************ 00:08:28.262 START TEST raid1_resize_test 00:08:28.262 ************************************ 00:08:28.262 11:18:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:08:28.262 11:18:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:08:28.262 11:18:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:28.262 11:18:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:28.262 11:18:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:28.262 11:18:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:28.262 11:18:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:28.262 11:18:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:28.262 11:18:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:28.262 11:18:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60729 00:08:28.262 11:18:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:28.262 11:18:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60729' 00:08:28.262 Process raid pid: 60729 00:08:28.262 11:18:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60729 00:08:28.262 11:18:11 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60729 ']' 00:08:28.262 11:18:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.262 11:18:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.262 11:18:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.262 11:18:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.262 11:18:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.262 [2024-11-20 11:18:11.271997] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:08:28.262 [2024-11-20 11:18:11.272140] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.528 [2024-11-20 11:18:11.450456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.528 [2024-11-20 11:18:11.582987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.807 [2024-11-20 11:18:11.805514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.807 [2024-11-20 11:18:11.805569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.065 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.065 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:08:29.065 11:18:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:29.065 11:18:12 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.065 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.324 Base_1 00:08:29.324 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.324 11:18:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:29.324 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.324 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.324 Base_2 00:08:29.324 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.324 11:18:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:08:29.324 11:18:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:29.324 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.324 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.324 [2024-11-20 11:18:12.202768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:29.324 [2024-11-20 11:18:12.204879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:29.324 [2024-11-20 11:18:12.204967] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:29.324 [2024-11-20 11:18:12.204982] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:29.324 [2024-11-20 11:18:12.205302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:29.324 [2024-11-20 11:18:12.205471] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:29.324 [2024-11-20 11:18:12.205486] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:29.324 [2024-11-20 11:18:12.205672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.325 [2024-11-20 11:18:12.214731] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:29.325 [2024-11-20 11:18:12.214771] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:29.325 true 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.325 [2024-11-20 11:18:12.230915] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:08:29.325 11:18:12 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.325 [2024-11-20 11:18:12.278637] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:29.325 [2024-11-20 11:18:12.278675] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:29.325 [2024-11-20 11:18:12.278710] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:08:29.325 true 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:29.325 [2024-11-20 11:18:12.294842] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:08:29.325 11:18:12 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60729 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60729 ']' 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60729 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60729 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:29.325 killing process with pid 60729 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60729' 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60729 00:08:29.325 [2024-11-20 11:18:12.384266] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:29.325 [2024-11-20 11:18:12.384383] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.325 11:18:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60729 00:08:29.325 [2024-11-20 11:18:12.384962] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:29.325 [2024-11-20 11:18:12.384996] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:29.325 [2024-11-20 11:18:12.405263] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:08:30.701 11:18:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:30.701 00:08:30.701 real 0m2.455s 00:08:30.701 user 0m2.654s 00:08:30.701 sys 0m0.357s 00:08:30.701 11:18:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.701 11:18:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.701 ************************************ 00:08:30.701 END TEST raid1_resize_test 00:08:30.701 ************************************ 00:08:30.701 11:18:13 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:30.701 11:18:13 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:30.701 11:18:13 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:08:30.701 11:18:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:30.701 11:18:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.701 11:18:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:30.701 ************************************ 00:08:30.701 START TEST raid_state_function_test 00:08:30.701 ************************************ 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:30.701 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:30.702 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60792 
00:08:30.702 Process raid pid: 60792 00:08:30.702 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60792' 00:08:30.702 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60792 00:08:30.702 11:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60792 ']' 00:08:30.702 11:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.702 11:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.702 11:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.702 11:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.702 11:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.702 11:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:30.702 [2024-11-20 11:18:13.790504] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:08:30.702 [2024-11-20 11:18:13.790625] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.960 [2024-11-20 11:18:13.952358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.219 [2024-11-20 11:18:14.083982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.219 [2024-11-20 11:18:14.318555] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.219 [2024-11-20 11:18:14.318608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.844 11:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.844 11:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:31.844 11:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:31.844 11:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.845 11:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.845 [2024-11-20 11:18:14.712832] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:31.845 [2024-11-20 11:18:14.712901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:31.845 [2024-11-20 11:18:14.712912] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:31.845 [2024-11-20 11:18:14.712922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:31.845 11:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.845 11:18:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:31.845 11:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.845 11:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.845 11:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.845 11:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.845 11:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:31.845 11:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.845 11:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.845 11:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.845 11:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.845 11:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.845 11:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.845 11:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.845 11:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.845 11:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.845 11:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.845 "name": "Existed_Raid", 00:08:31.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.845 "strip_size_kb": 64, 00:08:31.845 "state": "configuring", 00:08:31.845 
"raid_level": "raid0", 00:08:31.845 "superblock": false, 00:08:31.845 "num_base_bdevs": 2, 00:08:31.845 "num_base_bdevs_discovered": 0, 00:08:31.845 "num_base_bdevs_operational": 2, 00:08:31.845 "base_bdevs_list": [ 00:08:31.845 { 00:08:31.845 "name": "BaseBdev1", 00:08:31.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.845 "is_configured": false, 00:08:31.845 "data_offset": 0, 00:08:31.845 "data_size": 0 00:08:31.845 }, 00:08:31.845 { 00:08:31.845 "name": "BaseBdev2", 00:08:31.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.845 "is_configured": false, 00:08:31.845 "data_offset": 0, 00:08:31.845 "data_size": 0 00:08:31.845 } 00:08:31.845 ] 00:08:31.845 }' 00:08:31.845 11:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.845 11:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.102 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:32.102 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.102 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.102 [2024-11-20 11:18:15.180030] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:32.102 [2024-11-20 11:18:15.180083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:32.102 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.102 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:32.102 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.102 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:32.102 [2024-11-20 11:18:15.192001] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:32.103 [2024-11-20 11:18:15.192053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:32.103 [2024-11-20 11:18:15.192064] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.103 [2024-11-20 11:18:15.192078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.103 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.103 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:32.103 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.103 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.362 [2024-11-20 11:18:15.240087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.362 BaseBdev1 00:08:32.362 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.362 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:32.362 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:32.362 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:32.362 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:32.362 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:32.362 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:32.362 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:32.362 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.362 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.362 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.362 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:32.362 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.362 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.362 [ 00:08:32.362 { 00:08:32.362 "name": "BaseBdev1", 00:08:32.362 "aliases": [ 00:08:32.362 "6ad6a1b5-51b5-4d14-af62-9611f74fe12a" 00:08:32.362 ], 00:08:32.362 "product_name": "Malloc disk", 00:08:32.362 "block_size": 512, 00:08:32.362 "num_blocks": 65536, 00:08:32.362 "uuid": "6ad6a1b5-51b5-4d14-af62-9611f74fe12a", 00:08:32.362 "assigned_rate_limits": { 00:08:32.362 "rw_ios_per_sec": 0, 00:08:32.362 "rw_mbytes_per_sec": 0, 00:08:32.362 "r_mbytes_per_sec": 0, 00:08:32.362 "w_mbytes_per_sec": 0 00:08:32.362 }, 00:08:32.362 "claimed": true, 00:08:32.362 "claim_type": "exclusive_write", 00:08:32.362 "zoned": false, 00:08:32.362 "supported_io_types": { 00:08:32.362 "read": true, 00:08:32.362 "write": true, 00:08:32.362 "unmap": true, 00:08:32.362 "flush": true, 00:08:32.362 "reset": true, 00:08:32.362 "nvme_admin": false, 00:08:32.362 "nvme_io": false, 00:08:32.362 "nvme_io_md": false, 00:08:32.362 "write_zeroes": true, 00:08:32.362 "zcopy": true, 00:08:32.362 "get_zone_info": false, 00:08:32.363 "zone_management": false, 00:08:32.363 "zone_append": false, 00:08:32.363 "compare": false, 00:08:32.363 "compare_and_write": false, 00:08:32.363 "abort": true, 00:08:32.363 "seek_hole": false, 00:08:32.363 "seek_data": false, 00:08:32.363 "copy": true, 00:08:32.363 "nvme_iov_md": 
false 00:08:32.363 }, 00:08:32.363 "memory_domains": [ 00:08:32.363 { 00:08:32.363 "dma_device_id": "system", 00:08:32.363 "dma_device_type": 1 00:08:32.363 }, 00:08:32.363 { 00:08:32.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.363 "dma_device_type": 2 00:08:32.363 } 00:08:32.363 ], 00:08:32.363 "driver_specific": {} 00:08:32.363 } 00:08:32.363 ] 00:08:32.363 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.363 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:32.363 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:32.363 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.363 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.363 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.363 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.363 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.363 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.363 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.363 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.363 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.363 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.363 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.363 
11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.363 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.363 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.363 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.363 "name": "Existed_Raid", 00:08:32.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.363 "strip_size_kb": 64, 00:08:32.363 "state": "configuring", 00:08:32.363 "raid_level": "raid0", 00:08:32.363 "superblock": false, 00:08:32.363 "num_base_bdevs": 2, 00:08:32.363 "num_base_bdevs_discovered": 1, 00:08:32.363 "num_base_bdevs_operational": 2, 00:08:32.363 "base_bdevs_list": [ 00:08:32.363 { 00:08:32.363 "name": "BaseBdev1", 00:08:32.363 "uuid": "6ad6a1b5-51b5-4d14-af62-9611f74fe12a", 00:08:32.363 "is_configured": true, 00:08:32.363 "data_offset": 0, 00:08:32.363 "data_size": 65536 00:08:32.363 }, 00:08:32.363 { 00:08:32.363 "name": "BaseBdev2", 00:08:32.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.363 "is_configured": false, 00:08:32.363 "data_offset": 0, 00:08:32.363 "data_size": 0 00:08:32.363 } 00:08:32.363 ] 00:08:32.363 }' 00:08:32.363 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.363 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.622 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:32.622 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.622 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.622 [2024-11-20 11:18:15.723614] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:32.623 [2024-11-20 11:18:15.723684] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:32.623 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.623 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:32.623 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.623 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.623 [2024-11-20 11:18:15.735670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.882 [2024-11-20 11:18:15.737775] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.882 [2024-11-20 11:18:15.737826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.882 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.882 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:32.882 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:32.882 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:32.882 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.882 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.882 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.882 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.882 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:32.882 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.882 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.882 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.882 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.882 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.882 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.882 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.882 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.882 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.882 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.882 "name": "Existed_Raid", 00:08:32.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.882 "strip_size_kb": 64, 00:08:32.882 "state": "configuring", 00:08:32.882 "raid_level": "raid0", 00:08:32.882 "superblock": false, 00:08:32.882 "num_base_bdevs": 2, 00:08:32.882 "num_base_bdevs_discovered": 1, 00:08:32.882 "num_base_bdevs_operational": 2, 00:08:32.882 "base_bdevs_list": [ 00:08:32.882 { 00:08:32.882 "name": "BaseBdev1", 00:08:32.882 "uuid": "6ad6a1b5-51b5-4d14-af62-9611f74fe12a", 00:08:32.882 "is_configured": true, 00:08:32.882 "data_offset": 0, 00:08:32.882 "data_size": 65536 00:08:32.882 }, 00:08:32.882 { 00:08:32.882 "name": "BaseBdev2", 00:08:32.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.882 "is_configured": false, 00:08:32.882 "data_offset": 0, 00:08:32.882 "data_size": 0 00:08:32.882 } 00:08:32.882 
] 00:08:32.882 }' 00:08:32.882 11:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.883 11:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.142 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:33.142 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.143 [2024-11-20 11:18:16.209061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:33.143 [2024-11-20 11:18:16.209118] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:33.143 [2024-11-20 11:18:16.209127] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:33.143 [2024-11-20 11:18:16.209394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:33.143 [2024-11-20 11:18:16.209606] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:33.143 [2024-11-20 11:18:16.209628] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:33.143 [2024-11-20 11:18:16.209924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.143 BaseBdev2 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:33.143 11:18:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.143 [ 00:08:33.143 { 00:08:33.143 "name": "BaseBdev2", 00:08:33.143 "aliases": [ 00:08:33.143 "99a1e486-d3b1-4c9d-b670-a298980fa7aa" 00:08:33.143 ], 00:08:33.143 "product_name": "Malloc disk", 00:08:33.143 "block_size": 512, 00:08:33.143 "num_blocks": 65536, 00:08:33.143 "uuid": "99a1e486-d3b1-4c9d-b670-a298980fa7aa", 00:08:33.143 "assigned_rate_limits": { 00:08:33.143 "rw_ios_per_sec": 0, 00:08:33.143 "rw_mbytes_per_sec": 0, 00:08:33.143 "r_mbytes_per_sec": 0, 00:08:33.143 "w_mbytes_per_sec": 0 00:08:33.143 }, 00:08:33.143 "claimed": true, 00:08:33.143 "claim_type": "exclusive_write", 00:08:33.143 "zoned": false, 00:08:33.143 "supported_io_types": { 00:08:33.143 "read": true, 00:08:33.143 "write": true, 00:08:33.143 "unmap": true, 00:08:33.143 "flush": true, 00:08:33.143 "reset": true, 00:08:33.143 "nvme_admin": false, 00:08:33.143 "nvme_io": false, 00:08:33.143 "nvme_io_md": 
false, 00:08:33.143 "write_zeroes": true, 00:08:33.143 "zcopy": true, 00:08:33.143 "get_zone_info": false, 00:08:33.143 "zone_management": false, 00:08:33.143 "zone_append": false, 00:08:33.143 "compare": false, 00:08:33.143 "compare_and_write": false, 00:08:33.143 "abort": true, 00:08:33.143 "seek_hole": false, 00:08:33.143 "seek_data": false, 00:08:33.143 "copy": true, 00:08:33.143 "nvme_iov_md": false 00:08:33.143 }, 00:08:33.143 "memory_domains": [ 00:08:33.143 { 00:08:33.143 "dma_device_id": "system", 00:08:33.143 "dma_device_type": 1 00:08:33.143 }, 00:08:33.143 { 00:08:33.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.143 "dma_device_type": 2 00:08:33.143 } 00:08:33.143 ], 00:08:33.143 "driver_specific": {} 00:08:33.143 } 00:08:33.143 ] 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.143 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.403 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.403 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.403 "name": "Existed_Raid", 00:08:33.403 "uuid": "1870a774-b235-4ba9-87f7-aec014c770ab", 00:08:33.403 "strip_size_kb": 64, 00:08:33.403 "state": "online", 00:08:33.403 "raid_level": "raid0", 00:08:33.403 "superblock": false, 00:08:33.403 "num_base_bdevs": 2, 00:08:33.403 "num_base_bdevs_discovered": 2, 00:08:33.403 "num_base_bdevs_operational": 2, 00:08:33.403 "base_bdevs_list": [ 00:08:33.403 { 00:08:33.403 "name": "BaseBdev1", 00:08:33.403 "uuid": "6ad6a1b5-51b5-4d14-af62-9611f74fe12a", 00:08:33.403 "is_configured": true, 00:08:33.403 "data_offset": 0, 00:08:33.403 "data_size": 65536 00:08:33.403 }, 00:08:33.403 { 00:08:33.403 "name": "BaseBdev2", 00:08:33.403 "uuid": "99a1e486-d3b1-4c9d-b670-a298980fa7aa", 00:08:33.403 "is_configured": true, 00:08:33.403 "data_offset": 0, 00:08:33.403 "data_size": 65536 00:08:33.403 } 00:08:33.403 ] 00:08:33.403 }' 00:08:33.403 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:33.403 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.662 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:33.662 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:33.662 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:33.662 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:33.662 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:33.662 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:33.662 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:33.662 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:33.662 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.662 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.662 [2024-11-20 11:18:16.716674] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.662 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.662 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:33.662 "name": "Existed_Raid", 00:08:33.662 "aliases": [ 00:08:33.662 "1870a774-b235-4ba9-87f7-aec014c770ab" 00:08:33.662 ], 00:08:33.662 "product_name": "Raid Volume", 00:08:33.662 "block_size": 512, 00:08:33.662 "num_blocks": 131072, 00:08:33.662 "uuid": "1870a774-b235-4ba9-87f7-aec014c770ab", 00:08:33.662 "assigned_rate_limits": { 00:08:33.662 "rw_ios_per_sec": 0, 00:08:33.662 "rw_mbytes_per_sec": 0, 00:08:33.662 "r_mbytes_per_sec": 
0, 00:08:33.662 "w_mbytes_per_sec": 0 00:08:33.662 }, 00:08:33.662 "claimed": false, 00:08:33.662 "zoned": false, 00:08:33.662 "supported_io_types": { 00:08:33.662 "read": true, 00:08:33.662 "write": true, 00:08:33.662 "unmap": true, 00:08:33.662 "flush": true, 00:08:33.662 "reset": true, 00:08:33.662 "nvme_admin": false, 00:08:33.662 "nvme_io": false, 00:08:33.662 "nvme_io_md": false, 00:08:33.662 "write_zeroes": true, 00:08:33.662 "zcopy": false, 00:08:33.662 "get_zone_info": false, 00:08:33.662 "zone_management": false, 00:08:33.662 "zone_append": false, 00:08:33.662 "compare": false, 00:08:33.662 "compare_and_write": false, 00:08:33.662 "abort": false, 00:08:33.662 "seek_hole": false, 00:08:33.662 "seek_data": false, 00:08:33.662 "copy": false, 00:08:33.662 "nvme_iov_md": false 00:08:33.662 }, 00:08:33.662 "memory_domains": [ 00:08:33.662 { 00:08:33.662 "dma_device_id": "system", 00:08:33.662 "dma_device_type": 1 00:08:33.662 }, 00:08:33.662 { 00:08:33.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.662 "dma_device_type": 2 00:08:33.662 }, 00:08:33.662 { 00:08:33.662 "dma_device_id": "system", 00:08:33.662 "dma_device_type": 1 00:08:33.662 }, 00:08:33.662 { 00:08:33.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.662 "dma_device_type": 2 00:08:33.662 } 00:08:33.662 ], 00:08:33.662 "driver_specific": { 00:08:33.662 "raid": { 00:08:33.662 "uuid": "1870a774-b235-4ba9-87f7-aec014c770ab", 00:08:33.662 "strip_size_kb": 64, 00:08:33.662 "state": "online", 00:08:33.662 "raid_level": "raid0", 00:08:33.662 "superblock": false, 00:08:33.662 "num_base_bdevs": 2, 00:08:33.662 "num_base_bdevs_discovered": 2, 00:08:33.662 "num_base_bdevs_operational": 2, 00:08:33.662 "base_bdevs_list": [ 00:08:33.662 { 00:08:33.662 "name": "BaseBdev1", 00:08:33.663 "uuid": "6ad6a1b5-51b5-4d14-af62-9611f74fe12a", 00:08:33.663 "is_configured": true, 00:08:33.663 "data_offset": 0, 00:08:33.663 "data_size": 65536 00:08:33.663 }, 00:08:33.663 { 00:08:33.663 "name": "BaseBdev2", 
00:08:33.663 "uuid": "99a1e486-d3b1-4c9d-b670-a298980fa7aa", 00:08:33.663 "is_configured": true, 00:08:33.663 "data_offset": 0, 00:08:33.663 "data_size": 65536 00:08:33.663 } 00:08:33.663 ] 00:08:33.663 } 00:08:33.663 } 00:08:33.663 }' 00:08:33.663 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:33.923 BaseBdev2' 00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.923 11:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.923 [2024-11-20 11:18:16.924034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:33.923 [2024-11-20 11:18:16.924127] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:33.923 [2024-11-20 11:18:16.924220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.923 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.923 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:33.923 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:33.923 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:33.923 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:33.923 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:33.923 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:33.923 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.923 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:33.923 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.923 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.923 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:33.923 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.923 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.183 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.183 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.183 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.183 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.183 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.183 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.183 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.183 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.183 "name": "Existed_Raid", 00:08:34.183 "uuid": "1870a774-b235-4ba9-87f7-aec014c770ab", 00:08:34.183 "strip_size_kb": 64, 00:08:34.183 
"state": "offline", 00:08:34.183 "raid_level": "raid0", 00:08:34.183 "superblock": false, 00:08:34.183 "num_base_bdevs": 2, 00:08:34.183 "num_base_bdevs_discovered": 1, 00:08:34.183 "num_base_bdevs_operational": 1, 00:08:34.183 "base_bdevs_list": [ 00:08:34.183 { 00:08:34.183 "name": null, 00:08:34.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.183 "is_configured": false, 00:08:34.183 "data_offset": 0, 00:08:34.183 "data_size": 65536 00:08:34.183 }, 00:08:34.183 { 00:08:34.183 "name": "BaseBdev2", 00:08:34.183 "uuid": "99a1e486-d3b1-4c9d-b670-a298980fa7aa", 00:08:34.183 "is_configured": true, 00:08:34.183 "data_offset": 0, 00:08:34.183 "data_size": 65536 00:08:34.183 } 00:08:34.183 ] 00:08:34.183 }' 00:08:34.183 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.183 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.443 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:34.443 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:34.443 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.443 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:34.443 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.443 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.443 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.443 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:34.443 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:34.443 11:18:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:34.443 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.443 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.703 [2024-11-20 11:18:17.560915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:34.703 [2024-11-20 11:18:17.561036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60792 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60792 ']' 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60792 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60792 00:08:34.703 killing process with pid 60792 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60792' 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60792 00:08:34.703 [2024-11-20 11:18:17.774394] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:34.703 11:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60792 00:08:34.703 [2024-11-20 11:18:17.794352] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:36.084 11:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:36.084 00:08:36.084 real 0m5.330s 00:08:36.084 user 0m7.670s 00:08:36.084 sys 0m0.885s 00:08:36.084 11:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.084 ************************************ 00:08:36.084 END TEST raid_state_function_test 00:08:36.084 ************************************ 00:08:36.084 11:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.084 11:18:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:08:36.084 11:18:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:08:36.085 11:18:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.085 11:18:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:36.085 ************************************ 00:08:36.085 START TEST raid_state_function_test_sb 00:08:36.085 ************************************ 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:36.085 Process raid pid: 61045 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61045 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61045' 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61045 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61045 ']' 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
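The startup sequence traced above (launch `bdev_svc` with `-i 0 -L bdev_raid`, record `raid_pid=61045`, then `waitforlisten` on /var/tmp/spdk.sock) reduces to polling for the RPC socket before issuing any `rpc_cmd` calls. A minimal sketch of that polling loop follows; the path-existence test, retry count, and sleep interval are illustrative simplifications, not the harness's actual `waitforlisten` helper (which also verifies the target process is still alive):

```shell
# Hedged sketch of the waitforlisten idea: poll until the RPC socket path
# appears, giving up after max_retries attempts. The real helper additionally
# checks the spawned process with kill -0 and uses a proper socket test.
wait_for_sock() {
    local sock=$1 max_retries=${2:-100}
    local i=0
    while [ ! -e "$sock" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max_retries" ] && return 1
        sleep 0.01
    done
    return 0
}
```

Usage in the spirit of the trace: `wait_for_sock /var/tmp/spdk.sock 100 || exit 1` before the first `bdev_raid_create` RPC.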
00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.085 11:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.344 [2024-11-20 11:18:19.200348] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:08:36.344 [2024-11-20 11:18:19.200506] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.344 [2024-11-20 11:18:19.375563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.602 [2024-11-20 11:18:19.491939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.602 [2024-11-20 11:18:19.708834] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.602 [2024-11-20 11:18:19.708867] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
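The `verify_raid_bdev_state` helper traced throughout this log calls `rpc_cmd bdev_raid_get_bdevs all`, selects one raid bdev by name with jq, and compares its fields against the expected values. A stripped-down sketch of that check, stubbing the RPC output with JSON in the shape captured in the trace (requires jq; the field values here are illustrative, not taken from a live target):

```shell
# Stubbed bdev_raid_get_bdevs output, shaped like the dumps in the trace above.
raid_bdevs='[{"name": "Existed_Raid", "state": "configuring",
              "raid_level": "raid0", "strip_size_kb": 64,
              "num_base_bdevs_discovered": 0, "num_base_bdevs_operational": 2}]'

# Mirror of the harness's selection: pick the bdev by name, then compare
# its state field against the expected value ("configuring" in this sketch).
state=$(echo "$raid_bdevs" | jq -r '.[] | select(.name == "Existed_Raid").state')
if [ "$state" = "configuring" ]; then
    echo "state OK"
else
    echo "state mismatch: $state" >&2
    exit 1
fi
```

Against a live target the stub would be replaced by the real RPC call, e.g. `raid_bdevs=$(rpc_cmd bdev_raid_get_bdevs all)`, exactly as `bdev_raid.sh@113` does in the trace.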
00:08:37.171 [2024-11-20 11:18:20.057202] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:37.171 [2024-11-20 11:18:20.057310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:37.171 [2024-11-20 11:18:20.057342] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.171 [2024-11-20 11:18:20.057367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.171 11:18:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.171 "name": "Existed_Raid", 00:08:37.171 "uuid": "87078477-f618-4c28-9037-6033e1ce65e4", 00:08:37.171 "strip_size_kb": 64, 00:08:37.171 "state": "configuring", 00:08:37.171 "raid_level": "raid0", 00:08:37.171 "superblock": true, 00:08:37.171 "num_base_bdevs": 2, 00:08:37.171 "num_base_bdevs_discovered": 0, 00:08:37.171 "num_base_bdevs_operational": 2, 00:08:37.171 "base_bdevs_list": [ 00:08:37.171 { 00:08:37.171 "name": "BaseBdev1", 00:08:37.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.171 "is_configured": false, 00:08:37.171 "data_offset": 0, 00:08:37.171 "data_size": 0 00:08:37.171 }, 00:08:37.171 { 00:08:37.171 "name": "BaseBdev2", 00:08:37.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.171 "is_configured": false, 00:08:37.171 "data_offset": 0, 00:08:37.171 "data_size": 0 00:08:37.171 } 00:08:37.171 ] 00:08:37.171 }' 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.171 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.430 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:37.430 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.430 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.430 [2024-11-20 
11:18:20.504403] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:37.430 [2024-11-20 11:18:20.504515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:37.430 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.430 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:37.430 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.430 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.430 [2024-11-20 11:18:20.512376] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:37.430 [2024-11-20 11:18:20.512475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:37.430 [2024-11-20 11:18:20.512513] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.430 [2024-11-20 11:18:20.512544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.430 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.430 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:37.430 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.430 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.689 [2024-11-20 11:18:20.563821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.689 BaseBdev1 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.689 [ 00:08:37.689 { 00:08:37.689 "name": "BaseBdev1", 00:08:37.689 "aliases": [ 00:08:37.689 "12bae8b8-bd1e-4cfc-94a8-7d9356aafc82" 00:08:37.689 ], 00:08:37.689 "product_name": "Malloc disk", 00:08:37.689 "block_size": 512, 00:08:37.689 "num_blocks": 65536, 00:08:37.689 "uuid": "12bae8b8-bd1e-4cfc-94a8-7d9356aafc82", 00:08:37.689 "assigned_rate_limits": { 00:08:37.689 "rw_ios_per_sec": 0, 00:08:37.689 "rw_mbytes_per_sec": 0, 00:08:37.689 "r_mbytes_per_sec": 0, 00:08:37.689 
"w_mbytes_per_sec": 0 00:08:37.689 }, 00:08:37.689 "claimed": true, 00:08:37.689 "claim_type": "exclusive_write", 00:08:37.689 "zoned": false, 00:08:37.689 "supported_io_types": { 00:08:37.689 "read": true, 00:08:37.689 "write": true, 00:08:37.689 "unmap": true, 00:08:37.689 "flush": true, 00:08:37.689 "reset": true, 00:08:37.689 "nvme_admin": false, 00:08:37.689 "nvme_io": false, 00:08:37.689 "nvme_io_md": false, 00:08:37.689 "write_zeroes": true, 00:08:37.689 "zcopy": true, 00:08:37.689 "get_zone_info": false, 00:08:37.689 "zone_management": false, 00:08:37.689 "zone_append": false, 00:08:37.689 "compare": false, 00:08:37.689 "compare_and_write": false, 00:08:37.689 "abort": true, 00:08:37.689 "seek_hole": false, 00:08:37.689 "seek_data": false, 00:08:37.689 "copy": true, 00:08:37.689 "nvme_iov_md": false 00:08:37.689 }, 00:08:37.689 "memory_domains": [ 00:08:37.689 { 00:08:37.689 "dma_device_id": "system", 00:08:37.689 "dma_device_type": 1 00:08:37.689 }, 00:08:37.689 { 00:08:37.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.689 "dma_device_type": 2 00:08:37.689 } 00:08:37.689 ], 00:08:37.689 "driver_specific": {} 00:08:37.689 } 00:08:37.689 ] 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.689 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.689 "name": "Existed_Raid", 00:08:37.689 "uuid": "444da2ee-9a13-4a1c-9773-1df6ade04ca5", 00:08:37.689 "strip_size_kb": 64, 00:08:37.689 "state": "configuring", 00:08:37.689 "raid_level": "raid0", 00:08:37.689 "superblock": true, 00:08:37.689 "num_base_bdevs": 2, 00:08:37.689 "num_base_bdevs_discovered": 1, 00:08:37.689 "num_base_bdevs_operational": 2, 00:08:37.689 "base_bdevs_list": [ 00:08:37.689 { 00:08:37.689 "name": "BaseBdev1", 00:08:37.689 "uuid": "12bae8b8-bd1e-4cfc-94a8-7d9356aafc82", 00:08:37.689 "is_configured": true, 00:08:37.689 "data_offset": 2048, 00:08:37.689 "data_size": 63488 00:08:37.689 }, 00:08:37.690 { 00:08:37.690 "name": "BaseBdev2", 00:08:37.690 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:37.690 "is_configured": false, 00:08:37.690 "data_offset": 0, 00:08:37.690 "data_size": 0 00:08:37.690 } 00:08:37.690 ] 00:08:37.690 }' 00:08:37.690 11:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.690 11:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.948 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:37.948 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.948 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.948 [2024-11-20 11:18:21.035195] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:37.948 [2024-11-20 11:18:21.035310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:37.948 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.948 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:37.948 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.948 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.948 [2024-11-20 11:18:21.047232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.948 [2024-11-20 11:18:21.049321] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.948 [2024-11-20 11:18:21.049371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.948 11:18:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.948 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:37.948 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:37.948 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:37.948 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.948 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.948 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.948 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.949 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.949 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.949 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.949 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.949 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.949 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.949 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.949 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.949 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.207 11:18:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.207 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.207 "name": "Existed_Raid", 00:08:38.207 "uuid": "c60c76d5-b08f-4137-ae31-93da11093197", 00:08:38.207 "strip_size_kb": 64, 00:08:38.208 "state": "configuring", 00:08:38.208 "raid_level": "raid0", 00:08:38.208 "superblock": true, 00:08:38.208 "num_base_bdevs": 2, 00:08:38.208 "num_base_bdevs_discovered": 1, 00:08:38.208 "num_base_bdevs_operational": 2, 00:08:38.208 "base_bdevs_list": [ 00:08:38.208 { 00:08:38.208 "name": "BaseBdev1", 00:08:38.208 "uuid": "12bae8b8-bd1e-4cfc-94a8-7d9356aafc82", 00:08:38.208 "is_configured": true, 00:08:38.208 "data_offset": 2048, 00:08:38.208 "data_size": 63488 00:08:38.208 }, 00:08:38.208 { 00:08:38.208 "name": "BaseBdev2", 00:08:38.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.208 "is_configured": false, 00:08:38.208 "data_offset": 0, 00:08:38.208 "data_size": 0 00:08:38.208 } 00:08:38.208 ] 00:08:38.208 }' 00:08:38.208 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.208 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.465 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:38.465 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.465 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.465 [2024-11-20 11:18:21.572598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:38.465 [2024-11-20 11:18:21.572999] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:38.465 [2024-11-20 11:18:21.573059] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:38.465 [2024-11-20 
11:18:21.573367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:38.465 [2024-11-20 11:18:21.573586] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:38.465 BaseBdev2 00:08:38.465 [2024-11-20 11:18:21.573649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:38.465 [2024-11-20 11:18:21.573860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.465 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.465 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:38.465 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:38.465 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:38.465 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:38.465 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:38.465 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:38.465 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:38.465 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.465 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.725 [ 00:08:38.725 { 00:08:38.725 "name": "BaseBdev2", 00:08:38.725 "aliases": [ 00:08:38.725 "68c72698-c68c-4877-8d01-68ee2aa540e0" 00:08:38.725 ], 00:08:38.725 "product_name": "Malloc disk", 00:08:38.725 "block_size": 512, 00:08:38.725 "num_blocks": 65536, 00:08:38.725 "uuid": "68c72698-c68c-4877-8d01-68ee2aa540e0", 00:08:38.725 "assigned_rate_limits": { 00:08:38.725 "rw_ios_per_sec": 0, 00:08:38.725 "rw_mbytes_per_sec": 0, 00:08:38.725 "r_mbytes_per_sec": 0, 00:08:38.725 "w_mbytes_per_sec": 0 00:08:38.725 }, 00:08:38.725 "claimed": true, 00:08:38.725 "claim_type": "exclusive_write", 00:08:38.725 "zoned": false, 00:08:38.725 "supported_io_types": { 00:08:38.725 "read": true, 00:08:38.725 "write": true, 00:08:38.725 "unmap": true, 00:08:38.725 "flush": true, 00:08:38.725 "reset": true, 00:08:38.725 "nvme_admin": false, 00:08:38.725 "nvme_io": false, 00:08:38.725 "nvme_io_md": false, 00:08:38.725 "write_zeroes": true, 00:08:38.725 "zcopy": true, 00:08:38.725 "get_zone_info": false, 00:08:38.725 "zone_management": false, 00:08:38.725 "zone_append": false, 00:08:38.725 "compare": false, 00:08:38.725 "compare_and_write": false, 00:08:38.725 "abort": true, 00:08:38.725 "seek_hole": false, 00:08:38.725 "seek_data": false, 00:08:38.725 "copy": true, 00:08:38.725 "nvme_iov_md": false 00:08:38.725 }, 00:08:38.725 "memory_domains": [ 00:08:38.725 { 00:08:38.725 "dma_device_id": "system", 00:08:38.725 "dma_device_type": 1 00:08:38.725 }, 00:08:38.725 { 00:08:38.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.725 "dma_device_type": 2 00:08:38.725 } 00:08:38.725 ], 00:08:38.725 "driver_specific": {} 00:08:38.725 } 00:08:38.725 ] 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.725 "name": "Existed_Raid", 00:08:38.725 "uuid": "c60c76d5-b08f-4137-ae31-93da11093197", 00:08:38.725 "strip_size_kb": 64, 00:08:38.725 "state": "online", 00:08:38.725 "raid_level": "raid0", 00:08:38.725 "superblock": true, 00:08:38.725 "num_base_bdevs": 2, 00:08:38.725 "num_base_bdevs_discovered": 2, 00:08:38.725 "num_base_bdevs_operational": 2, 00:08:38.725 "base_bdevs_list": [ 00:08:38.725 { 00:08:38.725 "name": "BaseBdev1", 00:08:38.725 "uuid": "12bae8b8-bd1e-4cfc-94a8-7d9356aafc82", 00:08:38.725 "is_configured": true, 00:08:38.725 "data_offset": 2048, 00:08:38.725 "data_size": 63488 00:08:38.725 }, 00:08:38.725 { 00:08:38.725 "name": "BaseBdev2", 00:08:38.725 "uuid": "68c72698-c68c-4877-8d01-68ee2aa540e0", 00:08:38.725 "is_configured": true, 00:08:38.725 "data_offset": 2048, 00:08:38.725 "data_size": 63488 00:08:38.725 } 00:08:38.725 ] 00:08:38.725 }' 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.725 11:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.984 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:38.984 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:38.984 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:38.984 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:38.984 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:38.984 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:38.984 11:18:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:38.984 11:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.984 11:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.984 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:38.984 [2024-11-20 11:18:22.096115] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.242 11:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.242 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:39.242 "name": "Existed_Raid", 00:08:39.242 "aliases": [ 00:08:39.242 "c60c76d5-b08f-4137-ae31-93da11093197" 00:08:39.242 ], 00:08:39.242 "product_name": "Raid Volume", 00:08:39.242 "block_size": 512, 00:08:39.242 "num_blocks": 126976, 00:08:39.242 "uuid": "c60c76d5-b08f-4137-ae31-93da11093197", 00:08:39.242 "assigned_rate_limits": { 00:08:39.242 "rw_ios_per_sec": 0, 00:08:39.242 "rw_mbytes_per_sec": 0, 00:08:39.242 "r_mbytes_per_sec": 0, 00:08:39.242 "w_mbytes_per_sec": 0 00:08:39.242 }, 00:08:39.242 "claimed": false, 00:08:39.242 "zoned": false, 00:08:39.242 "supported_io_types": { 00:08:39.242 "read": true, 00:08:39.242 "write": true, 00:08:39.242 "unmap": true, 00:08:39.242 "flush": true, 00:08:39.242 "reset": true, 00:08:39.242 "nvme_admin": false, 00:08:39.242 "nvme_io": false, 00:08:39.242 "nvme_io_md": false, 00:08:39.242 "write_zeroes": true, 00:08:39.242 "zcopy": false, 00:08:39.242 "get_zone_info": false, 00:08:39.242 "zone_management": false, 00:08:39.242 "zone_append": false, 00:08:39.242 "compare": false, 00:08:39.242 "compare_and_write": false, 00:08:39.242 "abort": false, 00:08:39.242 "seek_hole": false, 00:08:39.242 "seek_data": false, 00:08:39.242 "copy": false, 00:08:39.242 "nvme_iov_md": 
false 00:08:39.242 }, 00:08:39.242 "memory_domains": [ 00:08:39.242 { 00:08:39.242 "dma_device_id": "system", 00:08:39.242 "dma_device_type": 1 00:08:39.242 }, 00:08:39.242 { 00:08:39.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.242 "dma_device_type": 2 00:08:39.242 }, 00:08:39.242 { 00:08:39.243 "dma_device_id": "system", 00:08:39.243 "dma_device_type": 1 00:08:39.243 }, 00:08:39.243 { 00:08:39.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.243 "dma_device_type": 2 00:08:39.243 } 00:08:39.243 ], 00:08:39.243 "driver_specific": { 00:08:39.243 "raid": { 00:08:39.243 "uuid": "c60c76d5-b08f-4137-ae31-93da11093197", 00:08:39.243 "strip_size_kb": 64, 00:08:39.243 "state": "online", 00:08:39.243 "raid_level": "raid0", 00:08:39.243 "superblock": true, 00:08:39.243 "num_base_bdevs": 2, 00:08:39.243 "num_base_bdevs_discovered": 2, 00:08:39.243 "num_base_bdevs_operational": 2, 00:08:39.243 "base_bdevs_list": [ 00:08:39.243 { 00:08:39.243 "name": "BaseBdev1", 00:08:39.243 "uuid": "12bae8b8-bd1e-4cfc-94a8-7d9356aafc82", 00:08:39.243 "is_configured": true, 00:08:39.243 "data_offset": 2048, 00:08:39.243 "data_size": 63488 00:08:39.243 }, 00:08:39.243 { 00:08:39.243 "name": "BaseBdev2", 00:08:39.243 "uuid": "68c72698-c68c-4877-8d01-68ee2aa540e0", 00:08:39.243 "is_configured": true, 00:08:39.243 "data_offset": 2048, 00:08:39.243 "data_size": 63488 00:08:39.243 } 00:08:39.243 ] 00:08:39.243 } 00:08:39.243 } 00:08:39.243 }' 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:39.243 BaseBdev2' 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.243 
11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.243 11:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.243 [2024-11-20 11:18:22.339652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:39.243 [2024-11-20 11:18:22.339744] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.243 [2024-11-20 11:18:22.339840] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.503 11:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.503 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:39.503 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:39.503 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:39.503 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:39.503 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:39.503 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:39.503 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.503 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:39.503 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.503 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.503 11:18:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:39.503 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.503 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.503 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.503 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.503 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.503 11:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.504 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.504 11:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.504 11:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.504 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.504 "name": "Existed_Raid", 00:08:39.504 "uuid": "c60c76d5-b08f-4137-ae31-93da11093197", 00:08:39.504 "strip_size_kb": 64, 00:08:39.504 "state": "offline", 00:08:39.504 "raid_level": "raid0", 00:08:39.504 "superblock": true, 00:08:39.504 "num_base_bdevs": 2, 00:08:39.504 "num_base_bdevs_discovered": 1, 00:08:39.504 "num_base_bdevs_operational": 1, 00:08:39.504 "base_bdevs_list": [ 00:08:39.504 { 00:08:39.504 "name": null, 00:08:39.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.504 "is_configured": false, 00:08:39.504 "data_offset": 0, 00:08:39.504 "data_size": 63488 00:08:39.504 }, 00:08:39.504 { 00:08:39.504 "name": "BaseBdev2", 00:08:39.504 "uuid": "68c72698-c68c-4877-8d01-68ee2aa540e0", 00:08:39.504 "is_configured": true, 
00:08:39.504 "data_offset": 2048, 00:08:39.504 "data_size": 63488 00:08:39.504 } 00:08:39.504 ] 00:08:39.504 }' 00:08:39.504 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.504 11:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.072 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:40.072 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:40.072 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.072 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:40.072 11:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.072 11:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.072 11:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.072 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:40.072 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:40.072 11:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:40.072 11:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.072 11:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.072 [2024-11-20 11:18:22.963120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:40.072 [2024-11-20 11:18:22.963238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:40.072 11:18:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.072 11:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:40.072 11:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:40.072 11:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.072 11:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.072 11:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.072 11:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:40.072 11:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.072 11:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:40.072 11:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:40.072 11:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:40.072 11:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61045 00:08:40.072 11:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61045 ']' 00:08:40.072 11:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61045 00:08:40.072 11:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:40.072 11:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.072 11:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61045 00:08:40.072 killing process with pid 61045 00:08:40.072 11:18:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:40.072 11:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:40.072 11:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61045' 00:08:40.073 11:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61045 00:08:40.073 [2024-11-20 11:18:23.153034] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:40.073 11:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61045 00:08:40.073 [2024-11-20 11:18:23.171654] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:41.449 ************************************ 00:08:41.449 END TEST raid_state_function_test_sb 00:08:41.449 ************************************ 00:08:41.449 11:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:41.449 00:08:41.449 real 0m5.227s 00:08:41.449 user 0m7.561s 00:08:41.449 sys 0m0.831s 00:08:41.449 11:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.449 11:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.449 11:18:24 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:41.449 11:18:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:41.449 11:18:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.449 11:18:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:41.449 ************************************ 00:08:41.449 START TEST raid_superblock_test 00:08:41.449 ************************************ 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61297 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61297 00:08:41.449 
11:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61297 ']' 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.449 11:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.449 [2024-11-20 11:18:24.488022] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:08:41.449 [2024-11-20 11:18:24.488238] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61297 ] 00:08:41.706 [2024-11-20 11:18:24.665616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.706 [2024-11-20 11:18:24.785132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.965 [2024-11-20 11:18:24.992825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.965 [2024-11-20 11:18:24.992856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 
00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.532 malloc1 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.532 [2024-11-20 11:18:25.387599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:42.532 [2024-11-20 11:18:25.387728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.532 [2024-11-20 11:18:25.387763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:08:42.532 [2024-11-20 11:18:25.387775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.532 [2024-11-20 11:18:25.390182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.532 [2024-11-20 11:18:25.390223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:42.532 pt1 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:42.532 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.533 malloc2 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.533 [2024-11-20 11:18:25.442501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:42.533 [2024-11-20 11:18:25.442673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.533 [2024-11-20 11:18:25.442743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:42.533 [2024-11-20 11:18:25.442788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.533 [2024-11-20 11:18:25.445244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.533 [2024-11-20 11:18:25.445332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:42.533 pt2 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.533 [2024-11-20 11:18:25.454551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:42.533 [2024-11-20 11:18:25.456711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:42.533 [2024-11-20 11:18:25.456956] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007780 00:08:42.533 [2024-11-20 11:18:25.457013] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:42.533 [2024-11-20 11:18:25.457335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:42.533 [2024-11-20 11:18:25.457575] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:42.533 [2024-11-20 11:18:25.457629] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:42.533 [2024-11-20 11:18:25.457858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.533 "name": "raid_bdev1", 00:08:42.533 "uuid": "9e7329e6-625d-4e73-bd6f-b05fd8323127", 00:08:42.533 "strip_size_kb": 64, 00:08:42.533 "state": "online", 00:08:42.533 "raid_level": "raid0", 00:08:42.533 "superblock": true, 00:08:42.533 "num_base_bdevs": 2, 00:08:42.533 "num_base_bdevs_discovered": 2, 00:08:42.533 "num_base_bdevs_operational": 2, 00:08:42.533 "base_bdevs_list": [ 00:08:42.533 { 00:08:42.533 "name": "pt1", 00:08:42.533 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.533 "is_configured": true, 00:08:42.533 "data_offset": 2048, 00:08:42.533 "data_size": 63488 00:08:42.533 }, 00:08:42.533 { 00:08:42.533 "name": "pt2", 00:08:42.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.533 "is_configured": true, 00:08:42.533 "data_offset": 2048, 00:08:42.533 "data_size": 63488 00:08:42.533 } 00:08:42.533 ] 00:08:42.533 }' 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.533 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.792 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:42.793 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:42.793 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:42.793 11:18:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:42.793 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:42.793 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:42.793 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:42.793 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:42.793 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.793 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.793 [2024-11-20 11:18:25.874095] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.793 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.052 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:43.052 "name": "raid_bdev1", 00:08:43.052 "aliases": [ 00:08:43.052 "9e7329e6-625d-4e73-bd6f-b05fd8323127" 00:08:43.052 ], 00:08:43.052 "product_name": "Raid Volume", 00:08:43.052 "block_size": 512, 00:08:43.052 "num_blocks": 126976, 00:08:43.052 "uuid": "9e7329e6-625d-4e73-bd6f-b05fd8323127", 00:08:43.052 "assigned_rate_limits": { 00:08:43.052 "rw_ios_per_sec": 0, 00:08:43.052 "rw_mbytes_per_sec": 0, 00:08:43.052 "r_mbytes_per_sec": 0, 00:08:43.052 "w_mbytes_per_sec": 0 00:08:43.052 }, 00:08:43.052 "claimed": false, 00:08:43.052 "zoned": false, 00:08:43.052 "supported_io_types": { 00:08:43.052 "read": true, 00:08:43.052 "write": true, 00:08:43.052 "unmap": true, 00:08:43.052 "flush": true, 00:08:43.052 "reset": true, 00:08:43.052 "nvme_admin": false, 00:08:43.052 "nvme_io": false, 00:08:43.052 "nvme_io_md": false, 00:08:43.052 "write_zeroes": true, 00:08:43.052 "zcopy": false, 00:08:43.052 "get_zone_info": false, 00:08:43.052 "zone_management": false, 00:08:43.052 
"zone_append": false, 00:08:43.052 "compare": false, 00:08:43.052 "compare_and_write": false, 00:08:43.052 "abort": false, 00:08:43.052 "seek_hole": false, 00:08:43.052 "seek_data": false, 00:08:43.052 "copy": false, 00:08:43.052 "nvme_iov_md": false 00:08:43.052 }, 00:08:43.052 "memory_domains": [ 00:08:43.052 { 00:08:43.052 "dma_device_id": "system", 00:08:43.052 "dma_device_type": 1 00:08:43.052 }, 00:08:43.052 { 00:08:43.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.052 "dma_device_type": 2 00:08:43.052 }, 00:08:43.052 { 00:08:43.052 "dma_device_id": "system", 00:08:43.052 "dma_device_type": 1 00:08:43.052 }, 00:08:43.052 { 00:08:43.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.052 "dma_device_type": 2 00:08:43.052 } 00:08:43.052 ], 00:08:43.052 "driver_specific": { 00:08:43.052 "raid": { 00:08:43.052 "uuid": "9e7329e6-625d-4e73-bd6f-b05fd8323127", 00:08:43.052 "strip_size_kb": 64, 00:08:43.053 "state": "online", 00:08:43.053 "raid_level": "raid0", 00:08:43.053 "superblock": true, 00:08:43.053 "num_base_bdevs": 2, 00:08:43.053 "num_base_bdevs_discovered": 2, 00:08:43.053 "num_base_bdevs_operational": 2, 00:08:43.053 "base_bdevs_list": [ 00:08:43.053 { 00:08:43.053 "name": "pt1", 00:08:43.053 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.053 "is_configured": true, 00:08:43.053 "data_offset": 2048, 00:08:43.053 "data_size": 63488 00:08:43.053 }, 00:08:43.053 { 00:08:43.053 "name": "pt2", 00:08:43.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.053 "is_configured": true, 00:08:43.053 "data_offset": 2048, 00:08:43.053 "data_size": 63488 00:08:43.053 } 00:08:43.053 ] 00:08:43.053 } 00:08:43.053 } 00:08:43.053 }' 00:08:43.053 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:43.053 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:43.053 pt2' 00:08:43.053 11:18:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.053 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:43.053 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.053 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:43.053 11:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.053 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.053 11:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.053 [2024-11-20 11:18:26.109715] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9e7329e6-625d-4e73-bd6f-b05fd8323127 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9e7329e6-625d-4e73-bd6f-b05fd8323127 ']' 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.053 [2024-11-20 11:18:26.137313] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:43.053 [2024-11-20 11:18:26.137394] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:43.053 [2024-11-20 11:18:26.137534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.053 [2024-11-20 11:18:26.137626] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.053 [2024-11-20 11:18:26.137687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:43.053 11:18:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.053 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:43.313 11:18:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.313 [2024-11-20 11:18:26.285116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:43.313 [2024-11-20 11:18:26.287330] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:43.313 [2024-11-20 11:18:26.287488] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:43.313 [2024-11-20 11:18:26.287608] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:43.313 [2024-11-20 11:18:26.287671] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:43.313 [2024-11-20 11:18:26.287710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:43.313 request: 00:08:43.313 { 00:08:43.313 "name": "raid_bdev1", 00:08:43.313 "raid_level": "raid0", 00:08:43.313 "base_bdevs": [ 00:08:43.313 "malloc1", 00:08:43.313 "malloc2" 00:08:43.313 ], 00:08:43.313 "strip_size_kb": 64, 00:08:43.313 "superblock": false, 00:08:43.313 "method": "bdev_raid_create", 00:08:43.313 "req_id": 1 00:08:43.313 } 00:08:43.313 Got JSON-RPC error response 00:08:43.313 response: 00:08:43.313 { 00:08:43.313 "code": -17, 00:08:43.313 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:43.313 } 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.313 11:18:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.313 [2024-11-20 11:18:26.353011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:43.313 [2024-11-20 11:18:26.353176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.313 [2024-11-20 11:18:26.353231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:43.313 [2024-11-20 11:18:26.353271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.313 [2024-11-20 11:18:26.355838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.313 [2024-11-20 11:18:26.355950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:43.313 [2024-11-20 11:18:26.356086] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:43.313 [2024-11-20 11:18:26.356197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:43.313 pt1 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:08:43.313 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.314 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.314 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.314 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.314 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:43.314 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.314 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.314 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.314 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.314 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.314 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.314 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.314 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.314 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.314 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.314 "name": "raid_bdev1", 00:08:43.314 "uuid": "9e7329e6-625d-4e73-bd6f-b05fd8323127", 00:08:43.314 "strip_size_kb": 64, 00:08:43.314 "state": "configuring", 00:08:43.314 "raid_level": "raid0", 00:08:43.314 "superblock": true, 00:08:43.314 "num_base_bdevs": 2, 00:08:43.314 
"num_base_bdevs_discovered": 1, 00:08:43.314 "num_base_bdevs_operational": 2, 00:08:43.314 "base_bdevs_list": [ 00:08:43.314 { 00:08:43.314 "name": "pt1", 00:08:43.314 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.314 "is_configured": true, 00:08:43.314 "data_offset": 2048, 00:08:43.314 "data_size": 63488 00:08:43.314 }, 00:08:43.314 { 00:08:43.314 "name": null, 00:08:43.314 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.314 "is_configured": false, 00:08:43.314 "data_offset": 2048, 00:08:43.314 "data_size": 63488 00:08:43.314 } 00:08:43.314 ] 00:08:43.314 }' 00:08:43.314 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.314 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.884 [2024-11-20 11:18:26.836183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:43.884 [2024-11-20 11:18:26.836361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.884 [2024-11-20 11:18:26.836412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:43.884 [2024-11-20 11:18:26.836473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.884 [2024-11-20 11:18:26.837042] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.884 [2024-11-20 11:18:26.837126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:43.884 [2024-11-20 11:18:26.837265] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:43.884 [2024-11-20 11:18:26.837329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:43.884 [2024-11-20 11:18:26.837520] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:43.884 [2024-11-20 11:18:26.837571] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:43.884 [2024-11-20 11:18:26.837870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:43.884 [2024-11-20 11:18:26.838109] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:43.884 [2024-11-20 11:18:26.838159] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:43.884 [2024-11-20 11:18:26.838369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.884 pt2 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 
00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.884 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.884 "name": "raid_bdev1", 00:08:43.884 "uuid": "9e7329e6-625d-4e73-bd6f-b05fd8323127", 00:08:43.884 "strip_size_kb": 64, 00:08:43.884 "state": "online", 00:08:43.884 "raid_level": "raid0", 00:08:43.884 "superblock": true, 00:08:43.884 "num_base_bdevs": 2, 00:08:43.884 "num_base_bdevs_discovered": 2, 00:08:43.884 "num_base_bdevs_operational": 2, 00:08:43.884 "base_bdevs_list": [ 00:08:43.884 { 00:08:43.884 "name": "pt1", 00:08:43.884 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.884 "is_configured": true, 00:08:43.884 "data_offset": 2048, 00:08:43.884 "data_size": 63488 00:08:43.884 }, 00:08:43.884 { 00:08:43.884 "name": "pt2", 00:08:43.884 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:43.884 "is_configured": true, 00:08:43.884 "data_offset": 2048, 00:08:43.884 "data_size": 63488 00:08:43.884 } 00:08:43.885 ] 00:08:43.885 }' 00:08:43.885 11:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.885 11:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.453 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:44.453 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:44.453 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:44.453 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:44.453 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.453 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.453 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:44.453 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.453 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.453 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.453 [2024-11-20 11:18:27.311906] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.453 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.453 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:44.453 "name": "raid_bdev1", 00:08:44.453 "aliases": [ 00:08:44.453 "9e7329e6-625d-4e73-bd6f-b05fd8323127" 00:08:44.453 ], 00:08:44.453 "product_name": "Raid Volume", 00:08:44.453 "block_size": 512, 00:08:44.453 
"num_blocks": 126976, 00:08:44.453 "uuid": "9e7329e6-625d-4e73-bd6f-b05fd8323127", 00:08:44.453 "assigned_rate_limits": { 00:08:44.453 "rw_ios_per_sec": 0, 00:08:44.453 "rw_mbytes_per_sec": 0, 00:08:44.453 "r_mbytes_per_sec": 0, 00:08:44.453 "w_mbytes_per_sec": 0 00:08:44.453 }, 00:08:44.453 "claimed": false, 00:08:44.453 "zoned": false, 00:08:44.453 "supported_io_types": { 00:08:44.453 "read": true, 00:08:44.453 "write": true, 00:08:44.453 "unmap": true, 00:08:44.453 "flush": true, 00:08:44.453 "reset": true, 00:08:44.453 "nvme_admin": false, 00:08:44.453 "nvme_io": false, 00:08:44.453 "nvme_io_md": false, 00:08:44.453 "write_zeroes": true, 00:08:44.453 "zcopy": false, 00:08:44.453 "get_zone_info": false, 00:08:44.453 "zone_management": false, 00:08:44.453 "zone_append": false, 00:08:44.453 "compare": false, 00:08:44.453 "compare_and_write": false, 00:08:44.453 "abort": false, 00:08:44.453 "seek_hole": false, 00:08:44.453 "seek_data": false, 00:08:44.453 "copy": false, 00:08:44.453 "nvme_iov_md": false 00:08:44.453 }, 00:08:44.453 "memory_domains": [ 00:08:44.453 { 00:08:44.453 "dma_device_id": "system", 00:08:44.453 "dma_device_type": 1 00:08:44.453 }, 00:08:44.453 { 00:08:44.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.453 "dma_device_type": 2 00:08:44.453 }, 00:08:44.453 { 00:08:44.453 "dma_device_id": "system", 00:08:44.453 "dma_device_type": 1 00:08:44.453 }, 00:08:44.453 { 00:08:44.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.453 "dma_device_type": 2 00:08:44.453 } 00:08:44.453 ], 00:08:44.453 "driver_specific": { 00:08:44.453 "raid": { 00:08:44.453 "uuid": "9e7329e6-625d-4e73-bd6f-b05fd8323127", 00:08:44.453 "strip_size_kb": 64, 00:08:44.453 "state": "online", 00:08:44.453 "raid_level": "raid0", 00:08:44.453 "superblock": true, 00:08:44.453 "num_base_bdevs": 2, 00:08:44.453 "num_base_bdevs_discovered": 2, 00:08:44.453 "num_base_bdevs_operational": 2, 00:08:44.453 "base_bdevs_list": [ 00:08:44.453 { 00:08:44.453 "name": "pt1", 
00:08:44.453 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.453 "is_configured": true, 00:08:44.453 "data_offset": 2048, 00:08:44.453 "data_size": 63488 00:08:44.453 }, 00:08:44.453 { 00:08:44.453 "name": "pt2", 00:08:44.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.453 "is_configured": true, 00:08:44.453 "data_offset": 2048, 00:08:44.453 "data_size": 63488 00:08:44.453 } 00:08:44.453 ] 00:08:44.453 } 00:08:44.453 } 00:08:44.453 }' 00:08:44.453 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.453 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:44.453 pt2' 00:08:44.453 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.453 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.454 [2024-11-20 11:18:27.507635] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9e7329e6-625d-4e73-bd6f-b05fd8323127 '!=' 9e7329e6-625d-4e73-bd6f-b05fd8323127 ']' 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@563 -- # killprocess 61297 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61297 ']' 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61297 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.454 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61297 00:08:44.713 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.713 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.713 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61297' 00:08:44.713 killing process with pid 61297 00:08:44.713 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61297 00:08:44.713 [2024-11-20 11:18:27.583752] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:44.714 11:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61297 00:08:44.714 [2024-11-20 11:18:27.583956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.714 [2024-11-20 11:18:27.584026] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.714 [2024-11-20 11:18:27.584039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:44.714 [2024-11-20 11:18:27.825077] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:46.095 11:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:46.095 00:08:46.095 real 0m4.674s 00:08:46.095 user 0m6.514s 00:08:46.095 
sys 0m0.730s 00:08:46.095 11:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.095 11:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.095 ************************************ 00:08:46.095 END TEST raid_superblock_test 00:08:46.095 ************************************ 00:08:46.095 11:18:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:08:46.095 11:18:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:46.095 11:18:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.095 11:18:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:46.095 ************************************ 00:08:46.095 START TEST raid_read_error_test 00:08:46.095 ************************************ 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.N8tuyES5wN 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61509 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61509 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61509 ']' 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.095 11:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.355 [2024-11-20 11:18:29.231469] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:08:46.355 [2024-11-20 11:18:29.231711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61509 ] 00:08:46.355 [2024-11-20 11:18:29.407605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.614 [2024-11-20 11:18:29.540601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.873 [2024-11-20 11:18:29.769791] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.873 [2024-11-20 11:18:29.769942] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.133 BaseBdev1_malloc 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.133 true 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.133 [2024-11-20 11:18:30.192187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:47.133 [2024-11-20 11:18:30.192355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.133 [2024-11-20 11:18:30.192406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:47.133 [2024-11-20 11:18:30.192464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.133 [2024-11-20 11:18:30.194955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.133 [2024-11-20 11:18:30.195072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:47.133 BaseBdev1 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.133 BaseBdev2_malloc 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.133 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.394 true 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.394 [2024-11-20 11:18:30.263748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:47.394 [2024-11-20 11:18:30.263927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.394 [2024-11-20 11:18:30.263975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:47.394 [2024-11-20 11:18:30.264016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.394 [2024-11-20 11:18:30.266536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:08:47.394 [2024-11-20 11:18:30.266656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:47.394 BaseBdev2 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.394 [2024-11-20 11:18:30.275802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.394 [2024-11-20 11:18:30.278040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:47.394 [2024-11-20 11:18:30.278317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:47.394 [2024-11-20 11:18:30.278341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:47.394 [2024-11-20 11:18:30.278664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:47.394 [2024-11-20 11:18:30.278886] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:47.394 [2024-11-20 11:18:30.278901] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:47.394 [2024-11-20 11:18:30.279101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.394 "name": "raid_bdev1", 00:08:47.394 "uuid": "19140041-5a45-4972-908f-531eba179052", 00:08:47.394 "strip_size_kb": 64, 00:08:47.394 "state": "online", 00:08:47.394 "raid_level": "raid0", 00:08:47.394 "superblock": true, 00:08:47.394 "num_base_bdevs": 2, 00:08:47.394 "num_base_bdevs_discovered": 2, 00:08:47.394 "num_base_bdevs_operational": 2, 00:08:47.394 "base_bdevs_list": [ 00:08:47.394 { 00:08:47.394 "name": "BaseBdev1", 00:08:47.394 "uuid": 
"01fabc13-c9f0-52d7-8653-01b3f8c5262c", 00:08:47.394 "is_configured": true, 00:08:47.394 "data_offset": 2048, 00:08:47.394 "data_size": 63488 00:08:47.394 }, 00:08:47.394 { 00:08:47.394 "name": "BaseBdev2", 00:08:47.394 "uuid": "c3e2af50-d0b3-5316-aead-13e79a7f1961", 00:08:47.394 "is_configured": true, 00:08:47.394 "data_offset": 2048, 00:08:47.394 "data_size": 63488 00:08:47.394 } 00:08:47.394 ] 00:08:47.394 }' 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.394 11:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.655 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:47.655 11:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:47.914 [2024-11-20 11:18:30.824837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:48.915 11:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:48.915 11:18:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.915 11:18:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.915 11:18:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.915 11:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:48.915 11:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:48.915 11:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:48.915 11:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:48.915 11:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:08:48.916 11:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.916 11:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.916 11:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.916 11:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.916 11:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.916 11:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.916 11:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.916 11:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.916 11:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.916 11:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.916 11:18:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.916 11:18:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.916 11:18:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.916 11:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.916 "name": "raid_bdev1", 00:08:48.916 "uuid": "19140041-5a45-4972-908f-531eba179052", 00:08:48.916 "strip_size_kb": 64, 00:08:48.916 "state": "online", 00:08:48.916 "raid_level": "raid0", 00:08:48.916 "superblock": true, 00:08:48.916 "num_base_bdevs": 2, 00:08:48.916 "num_base_bdevs_discovered": 2, 00:08:48.916 "num_base_bdevs_operational": 2, 00:08:48.916 "base_bdevs_list": [ 00:08:48.916 { 00:08:48.916 "name": "BaseBdev1", 00:08:48.916 "uuid": 
"01fabc13-c9f0-52d7-8653-01b3f8c5262c", 00:08:48.916 "is_configured": true, 00:08:48.916 "data_offset": 2048, 00:08:48.916 "data_size": 63488 00:08:48.916 }, 00:08:48.916 { 00:08:48.916 "name": "BaseBdev2", 00:08:48.916 "uuid": "c3e2af50-d0b3-5316-aead-13e79a7f1961", 00:08:48.916 "is_configured": true, 00:08:48.916 "data_offset": 2048, 00:08:48.916 "data_size": 63488 00:08:48.916 } 00:08:48.916 ] 00:08:48.916 }' 00:08:48.916 11:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.916 11:18:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.176 11:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:49.176 11:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.176 11:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.176 [2024-11-20 11:18:32.177686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:49.176 [2024-11-20 11:18:32.177809] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.177 [2024-11-20 11:18:32.180937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.177 [2024-11-20 11:18:32.181039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.177 [2024-11-20 11:18:32.181110] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:49.177 [2024-11-20 11:18:32.181159] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:49.177 { 00:08:49.177 "results": [ 00:08:49.177 { 00:08:49.177 "job": "raid_bdev1", 00:08:49.177 "core_mask": "0x1", 00:08:49.177 "workload": "randrw", 00:08:49.177 "percentage": 50, 00:08:49.177 "status": "finished", 00:08:49.177 "queue_depth": 1, 00:08:49.177 "io_size": 
131072, 00:08:49.177 "runtime": 1.353485, 00:08:49.177 "iops": 14116.152007595208, 00:08:49.177 "mibps": 1764.519000949401, 00:08:49.177 "io_failed": 1, 00:08:49.177 "io_timeout": 0, 00:08:49.177 "avg_latency_us": 98.24061864430216, 00:08:49.177 "min_latency_us": 28.39475982532751, 00:08:49.177 "max_latency_us": 1581.1633187772925 00:08:49.177 } 00:08:49.177 ], 00:08:49.177 "core_count": 1 00:08:49.177 } 00:08:49.177 11:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.177 11:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61509 00:08:49.177 11:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61509 ']' 00:08:49.177 11:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61509 00:08:49.177 11:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:49.177 11:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:49.177 11:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61509 00:08:49.177 killing process with pid 61509 00:08:49.177 11:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:49.177 11:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:49.177 11:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61509' 00:08:49.177 11:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61509 00:08:49.177 [2024-11-20 11:18:32.219437] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:49.177 11:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61509 00:08:49.437 [2024-11-20 11:18:32.370026] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:50.818 11:18:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.N8tuyES5wN 00:08:50.818 11:18:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:50.818 11:18:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:50.818 11:18:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:50.818 11:18:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:50.818 11:18:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:50.818 11:18:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:50.818 ************************************ 00:08:50.818 END TEST raid_read_error_test 00:08:50.818 11:18:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:50.818 00:08:50.818 real 0m4.591s 00:08:50.818 user 0m5.524s 00:08:50.818 sys 0m0.502s 00:08:50.818 11:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.818 11:18:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.818 ************************************ 00:08:50.819 11:18:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:08:50.819 11:18:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:50.819 11:18:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.819 11:18:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:50.819 ************************************ 00:08:50.819 START TEST raid_write_error_test 00:08:50.819 ************************************ 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:50.819 
11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:50.819 11:18:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NSBeoUPKpb 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61654 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61654 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61654 ']' 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:50.819 11:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.819 [2024-11-20 11:18:33.876963] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:08:50.819 [2024-11-20 11:18:33.877103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61654 ] 00:08:51.078 [2024-11-20 11:18:34.038326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.078 [2024-11-20 11:18:34.163003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.338 [2024-11-20 11:18:34.401258] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.338 [2024-11-20 11:18:34.401314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.908 BaseBdev1_malloc 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.908 true 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.908 [2024-11-20 11:18:34.833200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:51.908 [2024-11-20 11:18:34.833350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.908 [2024-11-20 11:18:34.833398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:51.908 [2024-11-20 11:18:34.833434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.908 [2024-11-20 11:18:34.835869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.908 [2024-11-20 11:18:34.835979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:51.908 BaseBdev1 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.908 BaseBdev2_malloc 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:51.908 11:18:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.908 true 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.908 [2024-11-20 11:18:34.907674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:51.908 [2024-11-20 11:18:34.907840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.908 [2024-11-20 11:18:34.907901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:51.908 [2024-11-20 11:18:34.907940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.908 [2024-11-20 11:18:34.910522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.908 [2024-11-20 11:18:34.910634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:51.908 BaseBdev2 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.908 [2024-11-20 11:18:34.919735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:51.908 [2024-11-20 11:18:34.921992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.908 [2024-11-20 11:18:34.922308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:51.908 [2024-11-20 11:18:34.922370] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:51.908 [2024-11-20 11:18:34.922741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:51.908 [2024-11-20 11:18:34.923015] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:51.908 [2024-11-20 11:18:34.923068] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:51.908 [2024-11-20 11:18:34.923327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.908 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.909 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.909 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.909 11:18:34 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.909 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.909 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.909 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.909 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.909 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.909 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.909 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.909 "name": "raid_bdev1", 00:08:51.909 "uuid": "9f617b1c-6272-4476-b7bb-ae31f2ed94b2", 00:08:51.909 "strip_size_kb": 64, 00:08:51.909 "state": "online", 00:08:51.909 "raid_level": "raid0", 00:08:51.909 "superblock": true, 00:08:51.909 "num_base_bdevs": 2, 00:08:51.909 "num_base_bdevs_discovered": 2, 00:08:51.909 "num_base_bdevs_operational": 2, 00:08:51.909 "base_bdevs_list": [ 00:08:51.909 { 00:08:51.909 "name": "BaseBdev1", 00:08:51.909 "uuid": "01b5066c-95cb-5170-a2c1-e67c210e0f2b", 00:08:51.909 "is_configured": true, 00:08:51.909 "data_offset": 2048, 00:08:51.909 "data_size": 63488 00:08:51.909 }, 00:08:51.909 { 00:08:51.909 "name": "BaseBdev2", 00:08:51.909 "uuid": "ccf9b34c-d4b7-5b28-b69e-ed11ac8c5271", 00:08:51.909 "is_configured": true, 00:08:51.909 "data_offset": 2048, 00:08:51.909 "data_size": 63488 00:08:51.909 } 00:08:51.909 ] 00:08:51.909 }' 00:08:51.909 11:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.909 11:18:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.482 11:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:52.482 11:18:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:52.482 [2024-11-20 11:18:35.468519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.426 11:18:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.426 "name": "raid_bdev1", 00:08:53.426 "uuid": "9f617b1c-6272-4476-b7bb-ae31f2ed94b2", 00:08:53.426 "strip_size_kb": 64, 00:08:53.426 "state": "online", 00:08:53.426 "raid_level": "raid0", 00:08:53.426 "superblock": true, 00:08:53.426 "num_base_bdevs": 2, 00:08:53.426 "num_base_bdevs_discovered": 2, 00:08:53.426 "num_base_bdevs_operational": 2, 00:08:53.426 "base_bdevs_list": [ 00:08:53.426 { 00:08:53.426 "name": "BaseBdev1", 00:08:53.426 "uuid": "01b5066c-95cb-5170-a2c1-e67c210e0f2b", 00:08:53.426 "is_configured": true, 00:08:53.426 "data_offset": 2048, 00:08:53.426 "data_size": 63488 00:08:53.426 }, 00:08:53.426 { 00:08:53.426 "name": "BaseBdev2", 00:08:53.426 "uuid": "ccf9b34c-d4b7-5b28-b69e-ed11ac8c5271", 00:08:53.426 "is_configured": true, 00:08:53.426 "data_offset": 2048, 00:08:53.426 "data_size": 63488 00:08:53.426 } 00:08:53.426 ] 00:08:53.426 }' 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.426 11:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.995 11:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:08:53.995 11:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.995 11:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.995 [2024-11-20 11:18:36.809751] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:53.995 [2024-11-20 11:18:36.809860] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.995 [2024-11-20 11:18:36.813157] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.995 [2024-11-20 11:18:36.813258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.996 [2024-11-20 11:18:36.813301] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:53.996 [2024-11-20 11:18:36.813314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:53.996 { 00:08:53.996 "results": [ 00:08:53.996 { 00:08:53.996 "job": "raid_bdev1", 00:08:53.996 "core_mask": "0x1", 00:08:53.996 "workload": "randrw", 00:08:53.996 "percentage": 50, 00:08:53.996 "status": "finished", 00:08:53.996 "queue_depth": 1, 00:08:53.996 "io_size": 131072, 00:08:53.996 "runtime": 1.341756, 00:08:53.996 "iops": 13471.15272821586, 00:08:53.996 "mibps": 1683.8940910269826, 00:08:53.996 "io_failed": 1, 00:08:53.996 "io_timeout": 0, 00:08:53.996 "avg_latency_us": 102.97757957425756, 00:08:53.996 "min_latency_us": 29.289082969432314, 00:08:53.996 "max_latency_us": 1681.3275109170306 00:08:53.996 } 00:08:53.996 ], 00:08:53.996 "core_count": 1 00:08:53.996 } 00:08:53.996 11:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.996 11:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61654 00:08:53.996 11:18:36 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61654 ']' 00:08:53.996 11:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61654 00:08:53.996 11:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:53.996 11:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.996 11:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61654 00:08:53.996 killing process with pid 61654 00:08:53.996 11:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:53.996 11:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:53.996 11:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61654' 00:08:53.996 11:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61654 00:08:53.996 [2024-11-20 11:18:36.847893] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:53.996 11:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61654 00:08:53.996 [2024-11-20 11:18:37.009821] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:55.372 11:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:55.372 11:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:55.372 11:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NSBeoUPKpb 00:08:55.372 11:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:55.372 11:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:55.372 11:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:55.372 11:18:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:08:55.372 11:18:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:55.372 00:08:55.372 real 0m4.569s 00:08:55.372 user 0m5.524s 00:08:55.372 sys 0m0.500s 00:08:55.372 ************************************ 00:08:55.372 END TEST raid_write_error_test 00:08:55.372 ************************************ 00:08:55.372 11:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.372 11:18:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.372 11:18:38 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:55.372 11:18:38 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:55.372 11:18:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:55.372 11:18:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.372 11:18:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:55.373 ************************************ 00:08:55.373 START TEST raid_state_function_test 00:08:55.373 ************************************ 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:55.373 Process raid pid: 61798 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61798 
00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61798' 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61798 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61798 ']' 00:08:55.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.373 11:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.632 [2024-11-20 11:18:38.510832] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:08:55.632 [2024-11-20 11:18:38.510975] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.632 [2024-11-20 11:18:38.691759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.890 [2024-11-20 11:18:38.824647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.216 [2024-11-20 11:18:39.059276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.216 [2024-11-20 11:18:39.059323] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.484 [2024-11-20 11:18:39.421134] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:56.484 [2024-11-20 11:18:39.421298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:56.484 [2024-11-20 11:18:39.421355] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:56.484 [2024-11-20 11:18:39.421412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.484 11:18:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.484 "name": "Existed_Raid", 00:08:56.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.484 "strip_size_kb": 64, 00:08:56.484 "state": "configuring", 00:08:56.484 
"raid_level": "concat", 00:08:56.484 "superblock": false, 00:08:56.484 "num_base_bdevs": 2, 00:08:56.484 "num_base_bdevs_discovered": 0, 00:08:56.484 "num_base_bdevs_operational": 2, 00:08:56.484 "base_bdevs_list": [ 00:08:56.484 { 00:08:56.484 "name": "BaseBdev1", 00:08:56.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.484 "is_configured": false, 00:08:56.484 "data_offset": 0, 00:08:56.484 "data_size": 0 00:08:56.484 }, 00:08:56.484 { 00:08:56.484 "name": "BaseBdev2", 00:08:56.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.484 "is_configured": false, 00:08:56.484 "data_offset": 0, 00:08:56.484 "data_size": 0 00:08:56.484 } 00:08:56.484 ] 00:08:56.484 }' 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.484 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.052 [2024-11-20 11:18:39.888337] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:57.052 [2024-11-20 11:18:39.888472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:57.052 [2024-11-20 11:18:39.900335] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:57.052 [2024-11-20 11:18:39.900478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:57.052 [2024-11-20 11:18:39.900531] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.052 [2024-11-20 11:18:39.900583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.052 [2024-11-20 11:18:39.953890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.052 BaseBdev1 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.052 [ 00:08:57.052 { 00:08:57.052 "name": "BaseBdev1", 00:08:57.052 "aliases": [ 00:08:57.052 "f9915d19-8896-4303-9491-6582335411ed" 00:08:57.052 ], 00:08:57.052 "product_name": "Malloc disk", 00:08:57.052 "block_size": 512, 00:08:57.052 "num_blocks": 65536, 00:08:57.052 "uuid": "f9915d19-8896-4303-9491-6582335411ed", 00:08:57.052 "assigned_rate_limits": { 00:08:57.052 "rw_ios_per_sec": 0, 00:08:57.052 "rw_mbytes_per_sec": 0, 00:08:57.052 "r_mbytes_per_sec": 0, 00:08:57.052 "w_mbytes_per_sec": 0 00:08:57.052 }, 00:08:57.052 "claimed": true, 00:08:57.052 "claim_type": "exclusive_write", 00:08:57.052 "zoned": false, 00:08:57.052 "supported_io_types": { 00:08:57.052 "read": true, 00:08:57.052 "write": true, 00:08:57.052 "unmap": true, 00:08:57.052 "flush": true, 00:08:57.052 "reset": true, 00:08:57.052 "nvme_admin": false, 00:08:57.052 "nvme_io": false, 00:08:57.052 "nvme_io_md": false, 00:08:57.052 "write_zeroes": true, 00:08:57.052 "zcopy": true, 00:08:57.052 "get_zone_info": false, 00:08:57.052 "zone_management": false, 00:08:57.052 "zone_append": false, 00:08:57.052 "compare": false, 00:08:57.052 "compare_and_write": false, 00:08:57.052 "abort": true, 00:08:57.052 "seek_hole": false, 00:08:57.052 "seek_data": false, 00:08:57.052 "copy": true, 00:08:57.052 "nvme_iov_md": 
false 00:08:57.052 }, 00:08:57.052 "memory_domains": [ 00:08:57.052 { 00:08:57.052 "dma_device_id": "system", 00:08:57.052 "dma_device_type": 1 00:08:57.052 }, 00:08:57.052 { 00:08:57.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.052 "dma_device_type": 2 00:08:57.052 } 00:08:57.052 ], 00:08:57.052 "driver_specific": {} 00:08:57.052 } 00:08:57.052 ] 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.052 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.053 11:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.053 
11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.053 11:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.053 11:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.053 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.053 "name": "Existed_Raid", 00:08:57.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.053 "strip_size_kb": 64, 00:08:57.053 "state": "configuring", 00:08:57.053 "raid_level": "concat", 00:08:57.053 "superblock": false, 00:08:57.053 "num_base_bdevs": 2, 00:08:57.053 "num_base_bdevs_discovered": 1, 00:08:57.053 "num_base_bdevs_operational": 2, 00:08:57.053 "base_bdevs_list": [ 00:08:57.053 { 00:08:57.053 "name": "BaseBdev1", 00:08:57.053 "uuid": "f9915d19-8896-4303-9491-6582335411ed", 00:08:57.053 "is_configured": true, 00:08:57.053 "data_offset": 0, 00:08:57.053 "data_size": 65536 00:08:57.053 }, 00:08:57.053 { 00:08:57.053 "name": "BaseBdev2", 00:08:57.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.053 "is_configured": false, 00:08:57.053 "data_offset": 0, 00:08:57.053 "data_size": 0 00:08:57.053 } 00:08:57.053 ] 00:08:57.053 }' 00:08:57.053 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.053 11:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.621 [2024-11-20 11:18:40.477074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:57.621 [2024-11-20 11:18:40.477206] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.621 [2024-11-20 11:18:40.489121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.621 [2024-11-20 11:18:40.491258] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.621 [2024-11-20 11:18:40.491383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.621 "name": "Existed_Raid", 00:08:57.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.621 "strip_size_kb": 64, 00:08:57.621 "state": "configuring", 00:08:57.621 "raid_level": "concat", 00:08:57.621 "superblock": false, 00:08:57.621 "num_base_bdevs": 2, 00:08:57.621 "num_base_bdevs_discovered": 1, 00:08:57.621 "num_base_bdevs_operational": 2, 00:08:57.621 "base_bdevs_list": [ 00:08:57.621 { 00:08:57.621 "name": "BaseBdev1", 00:08:57.621 "uuid": "f9915d19-8896-4303-9491-6582335411ed", 00:08:57.621 "is_configured": true, 00:08:57.621 "data_offset": 0, 00:08:57.621 "data_size": 65536 00:08:57.621 }, 00:08:57.621 { 00:08:57.621 "name": "BaseBdev2", 00:08:57.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.621 "is_configured": false, 00:08:57.621 "data_offset": 0, 00:08:57.621 "data_size": 0 00:08:57.621 } 
00:08:57.621 ] 00:08:57.621 }' 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.621 11:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.881 11:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:57.881 11:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.881 11:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.141 [2024-11-20 11:18:41.006552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.141 [2024-11-20 11:18:41.006713] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:58.141 [2024-11-20 11:18:41.006732] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:58.141 [2024-11-20 11:18:41.007055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:58.141 [2024-11-20 11:18:41.007240] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:58.141 [2024-11-20 11:18:41.007256] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:58.141 [2024-11-20 11:18:41.007628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.141 BaseBdev2 00:08:58.141 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.141 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:58.141 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:58.141 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.141 11:18:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:58.141 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.141 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.141 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:58.141 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.141 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.141 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.141 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:58.141 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.141 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.141 [ 00:08:58.141 { 00:08:58.141 "name": "BaseBdev2", 00:08:58.141 "aliases": [ 00:08:58.141 "10408517-40a5-4eb6-b283-e7fa3c616e7f" 00:08:58.141 ], 00:08:58.141 "product_name": "Malloc disk", 00:08:58.141 "block_size": 512, 00:08:58.141 "num_blocks": 65536, 00:08:58.141 "uuid": "10408517-40a5-4eb6-b283-e7fa3c616e7f", 00:08:58.141 "assigned_rate_limits": { 00:08:58.141 "rw_ios_per_sec": 0, 00:08:58.141 "rw_mbytes_per_sec": 0, 00:08:58.141 "r_mbytes_per_sec": 0, 00:08:58.141 "w_mbytes_per_sec": 0 00:08:58.141 }, 00:08:58.141 "claimed": true, 00:08:58.142 "claim_type": "exclusive_write", 00:08:58.142 "zoned": false, 00:08:58.142 "supported_io_types": { 00:08:58.142 "read": true, 00:08:58.142 "write": true, 00:08:58.142 "unmap": true, 00:08:58.142 "flush": true, 00:08:58.142 "reset": true, 00:08:58.142 "nvme_admin": false, 00:08:58.142 "nvme_io": false, 00:08:58.142 "nvme_io_md": 
false, 00:08:58.142 "write_zeroes": true, 00:08:58.142 "zcopy": true, 00:08:58.142 "get_zone_info": false, 00:08:58.142 "zone_management": false, 00:08:58.142 "zone_append": false, 00:08:58.142 "compare": false, 00:08:58.142 "compare_and_write": false, 00:08:58.142 "abort": true, 00:08:58.142 "seek_hole": false, 00:08:58.142 "seek_data": false, 00:08:58.142 "copy": true, 00:08:58.142 "nvme_iov_md": false 00:08:58.142 }, 00:08:58.142 "memory_domains": [ 00:08:58.142 { 00:08:58.142 "dma_device_id": "system", 00:08:58.142 "dma_device_type": 1 00:08:58.142 }, 00:08:58.142 { 00:08:58.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.142 "dma_device_type": 2 00:08:58.142 } 00:08:58.142 ], 00:08:58.142 "driver_specific": {} 00:08:58.142 } 00:08:58.142 ] 00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.142 "name": "Existed_Raid", 00:08:58.142 "uuid": "4bb90c15-307d-4706-834e-2de651b49ce9", 00:08:58.142 "strip_size_kb": 64, 00:08:58.142 "state": "online", 00:08:58.142 "raid_level": "concat", 00:08:58.142 "superblock": false, 00:08:58.142 "num_base_bdevs": 2, 00:08:58.142 "num_base_bdevs_discovered": 2, 00:08:58.142 "num_base_bdevs_operational": 2, 00:08:58.142 "base_bdevs_list": [ 00:08:58.142 { 00:08:58.142 "name": "BaseBdev1", 00:08:58.142 "uuid": "f9915d19-8896-4303-9491-6582335411ed", 00:08:58.142 "is_configured": true, 00:08:58.142 "data_offset": 0, 00:08:58.142 "data_size": 65536 00:08:58.142 }, 00:08:58.142 { 00:08:58.142 "name": "BaseBdev2", 00:08:58.142 "uuid": "10408517-40a5-4eb6-b283-e7fa3c616e7f", 00:08:58.142 "is_configured": true, 00:08:58.142 "data_offset": 0, 00:08:58.142 "data_size": 65536 00:08:58.142 } 00:08:58.142 ] 00:08:58.142 }' 00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:58.142 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.402 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:58.402 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:58.402 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:58.402 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:58.402 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:58.402 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:58.402 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:58.402 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:58.402 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.402 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.402 [2024-11-20 11:18:41.482106] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.402 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.661 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:58.661 "name": "Existed_Raid", 00:08:58.661 "aliases": [ 00:08:58.661 "4bb90c15-307d-4706-834e-2de651b49ce9" 00:08:58.661 ], 00:08:58.661 "product_name": "Raid Volume", 00:08:58.661 "block_size": 512, 00:08:58.661 "num_blocks": 131072, 00:08:58.661 "uuid": "4bb90c15-307d-4706-834e-2de651b49ce9", 00:08:58.661 "assigned_rate_limits": { 00:08:58.661 "rw_ios_per_sec": 0, 00:08:58.661 "rw_mbytes_per_sec": 0, 00:08:58.661 "r_mbytes_per_sec": 
0, 00:08:58.661 "w_mbytes_per_sec": 0 00:08:58.661 }, 00:08:58.661 "claimed": false, 00:08:58.661 "zoned": false, 00:08:58.661 "supported_io_types": { 00:08:58.661 "read": true, 00:08:58.661 "write": true, 00:08:58.661 "unmap": true, 00:08:58.661 "flush": true, 00:08:58.661 "reset": true, 00:08:58.661 "nvme_admin": false, 00:08:58.661 "nvme_io": false, 00:08:58.661 "nvme_io_md": false, 00:08:58.661 "write_zeroes": true, 00:08:58.661 "zcopy": false, 00:08:58.661 "get_zone_info": false, 00:08:58.661 "zone_management": false, 00:08:58.661 "zone_append": false, 00:08:58.661 "compare": false, 00:08:58.661 "compare_and_write": false, 00:08:58.661 "abort": false, 00:08:58.661 "seek_hole": false, 00:08:58.661 "seek_data": false, 00:08:58.661 "copy": false, 00:08:58.661 "nvme_iov_md": false 00:08:58.661 }, 00:08:58.661 "memory_domains": [ 00:08:58.661 { 00:08:58.662 "dma_device_id": "system", 00:08:58.662 "dma_device_type": 1 00:08:58.662 }, 00:08:58.662 { 00:08:58.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.662 "dma_device_type": 2 00:08:58.662 }, 00:08:58.662 { 00:08:58.662 "dma_device_id": "system", 00:08:58.662 "dma_device_type": 1 00:08:58.662 }, 00:08:58.662 { 00:08:58.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.662 "dma_device_type": 2 00:08:58.662 } 00:08:58.662 ], 00:08:58.662 "driver_specific": { 00:08:58.662 "raid": { 00:08:58.662 "uuid": "4bb90c15-307d-4706-834e-2de651b49ce9", 00:08:58.662 "strip_size_kb": 64, 00:08:58.662 "state": "online", 00:08:58.662 "raid_level": "concat", 00:08:58.662 "superblock": false, 00:08:58.662 "num_base_bdevs": 2, 00:08:58.662 "num_base_bdevs_discovered": 2, 00:08:58.662 "num_base_bdevs_operational": 2, 00:08:58.662 "base_bdevs_list": [ 00:08:58.662 { 00:08:58.662 "name": "BaseBdev1", 00:08:58.662 "uuid": "f9915d19-8896-4303-9491-6582335411ed", 00:08:58.662 "is_configured": true, 00:08:58.662 "data_offset": 0, 00:08:58.662 "data_size": 65536 00:08:58.662 }, 00:08:58.662 { 00:08:58.662 "name": "BaseBdev2", 
00:08:58.662 "uuid": "10408517-40a5-4eb6-b283-e7fa3c616e7f", 00:08:58.662 "is_configured": true, 00:08:58.662 "data_offset": 0, 00:08:58.662 "data_size": 65536 00:08:58.662 } 00:08:58.662 ] 00:08:58.662 } 00:08:58.662 } 00:08:58.662 }' 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:58.662 BaseBdev2' 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.662 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.662 [2024-11-20 11:18:41.701508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:58.662 [2024-11-20 11:18:41.701631] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.662 [2024-11-20 11:18:41.701739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.921 "name": "Existed_Raid", 00:08:58.921 "uuid": "4bb90c15-307d-4706-834e-2de651b49ce9", 00:08:58.921 "strip_size_kb": 64, 00:08:58.921 
"state": "offline", 00:08:58.921 "raid_level": "concat", 00:08:58.921 "superblock": false, 00:08:58.921 "num_base_bdevs": 2, 00:08:58.921 "num_base_bdevs_discovered": 1, 00:08:58.921 "num_base_bdevs_operational": 1, 00:08:58.921 "base_bdevs_list": [ 00:08:58.921 { 00:08:58.921 "name": null, 00:08:58.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.921 "is_configured": false, 00:08:58.921 "data_offset": 0, 00:08:58.921 "data_size": 65536 00:08:58.921 }, 00:08:58.921 { 00:08:58.921 "name": "BaseBdev2", 00:08:58.921 "uuid": "10408517-40a5-4eb6-b283-e7fa3c616e7f", 00:08:58.921 "is_configured": true, 00:08:58.921 "data_offset": 0, 00:08:58.921 "data_size": 65536 00:08:58.921 } 00:08:58.921 ] 00:08:58.921 }' 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.921 11:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.199 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:59.199 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:59.199 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.199 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:59.199 11:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.199 11:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.199 11:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.478 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:59.478 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.479 [2024-11-20 11:18:42.333005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:59.479 [2024-11-20 11:18:42.333180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61798 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61798 ']' 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61798 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61798 00:08:59.479 killing process with pid 61798 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61798' 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61798 00:08:59.479 [2024-11-20 11:18:42.542673] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:59.479 11:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61798 00:08:59.479 [2024-11-20 11:18:42.562738] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:00.857 00:09:00.857 real 0m5.368s 00:09:00.857 user 0m7.728s 00:09:00.857 sys 0m0.891s 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.857 ************************************ 00:09:00.857 END TEST raid_state_function_test 00:09:00.857 ************************************ 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.857 11:18:43 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:09:00.857 11:18:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:09:00.857 11:18:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.857 11:18:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:00.857 ************************************ 00:09:00.857 START TEST raid_state_function_test_sb 00:09:00.857 ************************************ 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62051 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62051' 00:09:00.857 Process raid pid: 62051 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62051 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62051 ']' 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.857 11:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.857 [2024-11-20 11:18:43.941992] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:09:00.857 [2024-11-20 11:18:43.942218] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.117 [2024-11-20 11:18:44.121624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.376 [2024-11-20 11:18:44.257036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.635 [2024-11-20 11:18:44.492790] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.635 [2024-11-20 11:18:44.492898] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.894 [2024-11-20 11:18:44.853216] bdev.c:8282:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:09:01.894 [2024-11-20 11:18:44.853378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:01.894 [2024-11-20 11:18:44.853420] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:01.894 [2024-11-20 11:18:44.853463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.894 "name": "Existed_Raid", 00:09:01.894 "uuid": "9016b8fc-75ef-4062-be69-5aa803e1645b", 00:09:01.894 "strip_size_kb": 64, 00:09:01.894 "state": "configuring", 00:09:01.894 "raid_level": "concat", 00:09:01.894 "superblock": true, 00:09:01.894 "num_base_bdevs": 2, 00:09:01.894 "num_base_bdevs_discovered": 0, 00:09:01.894 "num_base_bdevs_operational": 2, 00:09:01.894 "base_bdevs_list": [ 00:09:01.894 { 00:09:01.894 "name": "BaseBdev1", 00:09:01.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.894 "is_configured": false, 00:09:01.894 "data_offset": 0, 00:09:01.894 "data_size": 0 00:09:01.894 }, 00:09:01.894 { 00:09:01.894 "name": "BaseBdev2", 00:09:01.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.894 "is_configured": false, 00:09:01.894 "data_offset": 0, 00:09:01.894 "data_size": 0 00:09:01.894 } 00:09:01.894 ] 00:09:01.894 }' 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.894 11:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.462 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:02.462 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.462 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.462 [2024-11-20 11:18:45.332339] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:09:02.463 [2024-11-20 11:18:45.332474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.463 [2024-11-20 11:18:45.340344] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:02.463 [2024-11-20 11:18:45.340476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:02.463 [2024-11-20 11:18:45.340526] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:02.463 [2024-11-20 11:18:45.340559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.463 [2024-11-20 11:18:45.392505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.463 BaseBdev1 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.463 [ 00:09:02.463 { 00:09:02.463 "name": "BaseBdev1", 00:09:02.463 "aliases": [ 00:09:02.463 "2cb7ad1c-2afd-4848-8dcc-485ba12b3098" 00:09:02.463 ], 00:09:02.463 "product_name": "Malloc disk", 00:09:02.463 "block_size": 512, 00:09:02.463 "num_blocks": 65536, 00:09:02.463 "uuid": "2cb7ad1c-2afd-4848-8dcc-485ba12b3098", 00:09:02.463 "assigned_rate_limits": { 00:09:02.463 "rw_ios_per_sec": 0, 00:09:02.463 "rw_mbytes_per_sec": 0, 00:09:02.463 "r_mbytes_per_sec": 0, 00:09:02.463 "w_mbytes_per_sec": 0 00:09:02.463 }, 00:09:02.463 "claimed": true, 
00:09:02.463 "claim_type": "exclusive_write", 00:09:02.463 "zoned": false, 00:09:02.463 "supported_io_types": { 00:09:02.463 "read": true, 00:09:02.463 "write": true, 00:09:02.463 "unmap": true, 00:09:02.463 "flush": true, 00:09:02.463 "reset": true, 00:09:02.463 "nvme_admin": false, 00:09:02.463 "nvme_io": false, 00:09:02.463 "nvme_io_md": false, 00:09:02.463 "write_zeroes": true, 00:09:02.463 "zcopy": true, 00:09:02.463 "get_zone_info": false, 00:09:02.463 "zone_management": false, 00:09:02.463 "zone_append": false, 00:09:02.463 "compare": false, 00:09:02.463 "compare_and_write": false, 00:09:02.463 "abort": true, 00:09:02.463 "seek_hole": false, 00:09:02.463 "seek_data": false, 00:09:02.463 "copy": true, 00:09:02.463 "nvme_iov_md": false 00:09:02.463 }, 00:09:02.463 "memory_domains": [ 00:09:02.463 { 00:09:02.463 "dma_device_id": "system", 00:09:02.463 "dma_device_type": 1 00:09:02.463 }, 00:09:02.463 { 00:09:02.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.463 "dma_device_type": 2 00:09:02.463 } 00:09:02.463 ], 00:09:02.463 "driver_specific": {} 00:09:02.463 } 00:09:02.463 ] 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.463 11:18:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.463 "name": "Existed_Raid", 00:09:02.463 "uuid": "c371f0d7-6f30-419d-84fd-899ef175a89e", 00:09:02.463 "strip_size_kb": 64, 00:09:02.463 "state": "configuring", 00:09:02.463 "raid_level": "concat", 00:09:02.463 "superblock": true, 00:09:02.463 "num_base_bdevs": 2, 00:09:02.463 "num_base_bdevs_discovered": 1, 00:09:02.463 "num_base_bdevs_operational": 2, 00:09:02.463 "base_bdevs_list": [ 00:09:02.463 { 00:09:02.463 "name": "BaseBdev1", 00:09:02.463 "uuid": "2cb7ad1c-2afd-4848-8dcc-485ba12b3098", 00:09:02.463 "is_configured": true, 00:09:02.463 "data_offset": 2048, 00:09:02.463 "data_size": 63488 00:09:02.463 }, 00:09:02.463 { 00:09:02.463 "name": "BaseBdev2", 00:09:02.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.463 
"is_configured": false, 00:09:02.463 "data_offset": 0, 00:09:02.463 "data_size": 0 00:09:02.463 } 00:09:02.463 ] 00:09:02.463 }' 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.463 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.031 [2024-11-20 11:18:45.867779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:03.031 [2024-11-20 11:18:45.867940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.031 [2024-11-20 11:18:45.879851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.031 [2024-11-20 11:18:45.882098] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:03.031 [2024-11-20 11:18:45.882216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.031 11:18:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.031 11:18:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.031 "name": "Existed_Raid", 00:09:03.031 "uuid": "ce90b3a3-cb82-465e-b0ad-3e06113c6f42", 00:09:03.031 "strip_size_kb": 64, 00:09:03.031 "state": "configuring", 00:09:03.031 "raid_level": "concat", 00:09:03.031 "superblock": true, 00:09:03.031 "num_base_bdevs": 2, 00:09:03.031 "num_base_bdevs_discovered": 1, 00:09:03.031 "num_base_bdevs_operational": 2, 00:09:03.031 "base_bdevs_list": [ 00:09:03.031 { 00:09:03.031 "name": "BaseBdev1", 00:09:03.031 "uuid": "2cb7ad1c-2afd-4848-8dcc-485ba12b3098", 00:09:03.031 "is_configured": true, 00:09:03.031 "data_offset": 2048, 00:09:03.031 "data_size": 63488 00:09:03.031 }, 00:09:03.031 { 00:09:03.031 "name": "BaseBdev2", 00:09:03.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.031 "is_configured": false, 00:09:03.031 "data_offset": 0, 00:09:03.031 "data_size": 0 00:09:03.031 } 00:09:03.031 ] 00:09:03.031 }' 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.031 11:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.290 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:03.290 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.290 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.548 [2024-11-20 11:18:46.417237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:03.548 [2024-11-20 11:18:46.417689] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:03.548 [2024-11-20 11:18:46.417753] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:03.548 BaseBdev2 00:09:03.548 [2024-11-20 11:18:46.418088] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:03.548 [2024-11-20 11:18:46.418266] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:03.548 [2024-11-20 11:18:46.418283] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:03.548 [2024-11-20 11:18:46.418486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.548 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.548 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:03.548 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:03.548 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:03.548 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:03.548 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:03.548 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:03.548 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:03.548 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.548 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.548 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.548 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:03.548 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.548 
11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.548 [ 00:09:03.548 { 00:09:03.548 "name": "BaseBdev2", 00:09:03.548 "aliases": [ 00:09:03.548 "3973443b-cd60-4d16-b094-c05c05c57b74" 00:09:03.548 ], 00:09:03.548 "product_name": "Malloc disk", 00:09:03.548 "block_size": 512, 00:09:03.548 "num_blocks": 65536, 00:09:03.548 "uuid": "3973443b-cd60-4d16-b094-c05c05c57b74", 00:09:03.548 "assigned_rate_limits": { 00:09:03.548 "rw_ios_per_sec": 0, 00:09:03.548 "rw_mbytes_per_sec": 0, 00:09:03.548 "r_mbytes_per_sec": 0, 00:09:03.548 "w_mbytes_per_sec": 0 00:09:03.548 }, 00:09:03.548 "claimed": true, 00:09:03.548 "claim_type": "exclusive_write", 00:09:03.548 "zoned": false, 00:09:03.548 "supported_io_types": { 00:09:03.548 "read": true, 00:09:03.548 "write": true, 00:09:03.548 "unmap": true, 00:09:03.548 "flush": true, 00:09:03.548 "reset": true, 00:09:03.548 "nvme_admin": false, 00:09:03.548 "nvme_io": false, 00:09:03.548 "nvme_io_md": false, 00:09:03.548 "write_zeroes": true, 00:09:03.548 "zcopy": true, 00:09:03.548 "get_zone_info": false, 00:09:03.548 "zone_management": false, 00:09:03.548 "zone_append": false, 00:09:03.548 "compare": false, 00:09:03.548 "compare_and_write": false, 00:09:03.549 "abort": true, 00:09:03.549 "seek_hole": false, 00:09:03.549 "seek_data": false, 00:09:03.549 "copy": true, 00:09:03.549 "nvme_iov_md": false 00:09:03.549 }, 00:09:03.549 "memory_domains": [ 00:09:03.549 { 00:09:03.549 "dma_device_id": "system", 00:09:03.549 "dma_device_type": 1 00:09:03.549 }, 00:09:03.549 { 00:09:03.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.549 "dma_device_type": 2 00:09:03.549 } 00:09:03.549 ], 00:09:03.549 "driver_specific": {} 00:09:03.549 } 00:09:03.549 ] 00:09:03.549 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.549 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:03.549 11:18:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:03.549 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:03.549 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:03.549 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.549 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.549 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.549 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.549 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:03.549 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.549 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.549 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.549 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.549 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.549 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.549 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.549 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.549 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.549 11:18:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.549 "name": "Existed_Raid", 00:09:03.549 "uuid": "ce90b3a3-cb82-465e-b0ad-3e06113c6f42", 00:09:03.549 "strip_size_kb": 64, 00:09:03.549 "state": "online", 00:09:03.549 "raid_level": "concat", 00:09:03.549 "superblock": true, 00:09:03.549 "num_base_bdevs": 2, 00:09:03.549 "num_base_bdevs_discovered": 2, 00:09:03.549 "num_base_bdevs_operational": 2, 00:09:03.549 "base_bdevs_list": [ 00:09:03.549 { 00:09:03.549 "name": "BaseBdev1", 00:09:03.549 "uuid": "2cb7ad1c-2afd-4848-8dcc-485ba12b3098", 00:09:03.549 "is_configured": true, 00:09:03.549 "data_offset": 2048, 00:09:03.549 "data_size": 63488 00:09:03.549 }, 00:09:03.549 { 00:09:03.549 "name": "BaseBdev2", 00:09:03.549 "uuid": "3973443b-cd60-4d16-b094-c05c05c57b74", 00:09:03.549 "is_configured": true, 00:09:03.549 "data_offset": 2048, 00:09:03.549 "data_size": 63488 00:09:03.549 } 00:09:03.549 ] 00:09:03.549 }' 00:09:03.549 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.549 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.116 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:04.116 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:04.116 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:04.116 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:04.116 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:04.116 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:04.116 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:09:04.116 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:04.116 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.116 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.116 [2024-11-20 11:18:46.948778] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.116 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.116 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:04.116 "name": "Existed_Raid", 00:09:04.116 "aliases": [ 00:09:04.116 "ce90b3a3-cb82-465e-b0ad-3e06113c6f42" 00:09:04.116 ], 00:09:04.116 "product_name": "Raid Volume", 00:09:04.116 "block_size": 512, 00:09:04.116 "num_blocks": 126976, 00:09:04.116 "uuid": "ce90b3a3-cb82-465e-b0ad-3e06113c6f42", 00:09:04.116 "assigned_rate_limits": { 00:09:04.116 "rw_ios_per_sec": 0, 00:09:04.116 "rw_mbytes_per_sec": 0, 00:09:04.116 "r_mbytes_per_sec": 0, 00:09:04.116 "w_mbytes_per_sec": 0 00:09:04.116 }, 00:09:04.116 "claimed": false, 00:09:04.116 "zoned": false, 00:09:04.116 "supported_io_types": { 00:09:04.116 "read": true, 00:09:04.116 "write": true, 00:09:04.116 "unmap": true, 00:09:04.116 "flush": true, 00:09:04.116 "reset": true, 00:09:04.116 "nvme_admin": false, 00:09:04.116 "nvme_io": false, 00:09:04.116 "nvme_io_md": false, 00:09:04.116 "write_zeroes": true, 00:09:04.116 "zcopy": false, 00:09:04.116 "get_zone_info": false, 00:09:04.116 "zone_management": false, 00:09:04.116 "zone_append": false, 00:09:04.116 "compare": false, 00:09:04.116 "compare_and_write": false, 00:09:04.116 "abort": false, 00:09:04.117 "seek_hole": false, 00:09:04.117 "seek_data": false, 00:09:04.117 "copy": false, 00:09:04.117 "nvme_iov_md": false 00:09:04.117 }, 00:09:04.117 "memory_domains": [ 00:09:04.117 { 00:09:04.117 
"dma_device_id": "system", 00:09:04.117 "dma_device_type": 1 00:09:04.117 }, 00:09:04.117 { 00:09:04.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.117 "dma_device_type": 2 00:09:04.117 }, 00:09:04.117 { 00:09:04.117 "dma_device_id": "system", 00:09:04.117 "dma_device_type": 1 00:09:04.117 }, 00:09:04.117 { 00:09:04.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.117 "dma_device_type": 2 00:09:04.117 } 00:09:04.117 ], 00:09:04.117 "driver_specific": { 00:09:04.117 "raid": { 00:09:04.117 "uuid": "ce90b3a3-cb82-465e-b0ad-3e06113c6f42", 00:09:04.117 "strip_size_kb": 64, 00:09:04.117 "state": "online", 00:09:04.117 "raid_level": "concat", 00:09:04.117 "superblock": true, 00:09:04.117 "num_base_bdevs": 2, 00:09:04.117 "num_base_bdevs_discovered": 2, 00:09:04.117 "num_base_bdevs_operational": 2, 00:09:04.117 "base_bdevs_list": [ 00:09:04.117 { 00:09:04.117 "name": "BaseBdev1", 00:09:04.117 "uuid": "2cb7ad1c-2afd-4848-8dcc-485ba12b3098", 00:09:04.117 "is_configured": true, 00:09:04.117 "data_offset": 2048, 00:09:04.117 "data_size": 63488 00:09:04.117 }, 00:09:04.117 { 00:09:04.117 "name": "BaseBdev2", 00:09:04.117 "uuid": "3973443b-cd60-4d16-b094-c05c05c57b74", 00:09:04.117 "is_configured": true, 00:09:04.117 "data_offset": 2048, 00:09:04.117 "data_size": 63488 00:09:04.117 } 00:09:04.117 ] 00:09:04.117 } 00:09:04.117 } 00:09:04.117 }' 00:09:04.117 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:04.117 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:04.117 BaseBdev2' 00:09:04.117 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.117 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:04.117 11:18:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.117 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:04.117 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.117 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.117 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.117 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.117 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.117 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.117 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.117 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:04.117 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.117 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.117 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.117 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.117 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.117 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.117 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:09:04.117 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.117 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.117 [2024-11-20 11:18:47.200100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:04.117 [2024-11-20 11:18:47.200331] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.117 [2024-11-20 11:18:47.200479] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.375 "name": "Existed_Raid", 00:09:04.375 "uuid": "ce90b3a3-cb82-465e-b0ad-3e06113c6f42", 00:09:04.375 "strip_size_kb": 64, 00:09:04.375 "state": "offline", 00:09:04.375 "raid_level": "concat", 00:09:04.375 "superblock": true, 00:09:04.375 "num_base_bdevs": 2, 00:09:04.375 "num_base_bdevs_discovered": 1, 00:09:04.375 "num_base_bdevs_operational": 1, 00:09:04.375 "base_bdevs_list": [ 00:09:04.375 { 00:09:04.375 "name": null, 00:09:04.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.375 "is_configured": false, 00:09:04.375 "data_offset": 0, 00:09:04.375 "data_size": 63488 00:09:04.375 }, 00:09:04.375 { 00:09:04.375 "name": "BaseBdev2", 00:09:04.375 "uuid": "3973443b-cd60-4d16-b094-c05c05c57b74", 00:09:04.375 "is_configured": true, 00:09:04.375 "data_offset": 2048, 00:09:04.375 "data_size": 63488 00:09:04.375 } 00:09:04.375 ] 
00:09:04.375 }' 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.375 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.951 [2024-11-20 11:18:47.820087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:04.951 [2024-11-20 11:18:47.820264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.951 11:18:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62051 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62051 ']' 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62051 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.951 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62051 00:09:04.951 11:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:04.951 11:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:09:04.951 11:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62051' 00:09:04.951 killing process with pid 62051 00:09:04.951 11:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62051 00:09:04.951 [2024-11-20 11:18:48.022782] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:04.951 11:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62051 00:09:04.951 [2024-11-20 11:18:48.043088] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:06.325 11:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:06.325 00:09:06.325 real 0m5.475s 00:09:06.325 user 0m7.886s 00:09:06.325 sys 0m0.875s 00:09:06.325 11:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.325 ************************************ 00:09:06.325 END TEST raid_state_function_test_sb 00:09:06.325 ************************************ 00:09:06.325 11:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.325 11:18:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:09:06.325 11:18:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:06.325 11:18:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.325 11:18:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:06.325 ************************************ 00:09:06.325 START TEST raid_superblock_test 00:09:06.325 ************************************ 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 
-- # local num_base_bdevs=2 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62303 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62303 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62303 ']' 00:09:06.325 11:18:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.325 11:18:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.583 [2024-11-20 11:18:49.486977] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:09:06.583 [2024-11-20 11:18:49.487125] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62303 ] 00:09:06.583 [2024-11-20 11:18:49.667301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.840 [2024-11-20 11:18:49.798997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.097 [2024-11-20 11:18:50.036293] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.097 [2024-11-20 11:18:50.036370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.355 11:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.355 11:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:07.355 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:07.355 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:07.355 
11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:07.355 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:07.355 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:07.355 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:07.355 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:07.355 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:07.355 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:07.355 11:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.355 11:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.614 malloc1 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.614 [2024-11-20 11:18:50.495180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:07.614 [2024-11-20 11:18:50.495366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.614 [2024-11-20 11:18:50.495449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:07.614 [2024-11-20 11:18:50.495525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:09:07.614 [2024-11-20 11:18:50.498201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.614 [2024-11-20 11:18:50.498324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:07.614 pt1 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.614 malloc2 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.614 [2024-11-20 11:18:50.562299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:07.614 [2024-11-20 11:18:50.562371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.614 [2024-11-20 11:18:50.562400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:07.614 [2024-11-20 11:18:50.562410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.614 [2024-11-20 11:18:50.564888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.614 [2024-11-20 11:18:50.564931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:07.614 pt2 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.614 [2024-11-20 11:18:50.574388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:07.614 [2024-11-20 11:18:50.576599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:07.614 [2024-11-20 11:18:50.576873] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:07.614 [2024-11-20 11:18:50.576932] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:09:07.614 [2024-11-20 11:18:50.577269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:07.614 [2024-11-20 11:18:50.577479] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:07.614 [2024-11-20 11:18:50.577500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:07.614 [2024-11-20 11:18:50.577765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.614 11:18:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.614 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.614 "name": "raid_bdev1", 00:09:07.614 "uuid": "62a4f0bc-26e7-442e-9c31-4d0d3b5febc2", 00:09:07.614 "strip_size_kb": 64, 00:09:07.614 "state": "online", 00:09:07.614 "raid_level": "concat", 00:09:07.614 "superblock": true, 00:09:07.614 "num_base_bdevs": 2, 00:09:07.614 "num_base_bdevs_discovered": 2, 00:09:07.614 "num_base_bdevs_operational": 2, 00:09:07.614 "base_bdevs_list": [ 00:09:07.614 { 00:09:07.614 "name": "pt1", 00:09:07.615 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:07.615 "is_configured": true, 00:09:07.615 "data_offset": 2048, 00:09:07.615 "data_size": 63488 00:09:07.615 }, 00:09:07.615 { 00:09:07.615 "name": "pt2", 00:09:07.615 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.615 "is_configured": true, 00:09:07.615 "data_offset": 2048, 00:09:07.615 "data_size": 63488 00:09:07.615 } 00:09:07.615 ] 00:09:07.615 }' 00:09:07.615 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.615 11:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.181 [2024-11-20 11:18:51.037882] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:08.181 "name": "raid_bdev1", 00:09:08.181 "aliases": [ 00:09:08.181 "62a4f0bc-26e7-442e-9c31-4d0d3b5febc2" 00:09:08.181 ], 00:09:08.181 "product_name": "Raid Volume", 00:09:08.181 "block_size": 512, 00:09:08.181 "num_blocks": 126976, 00:09:08.181 "uuid": "62a4f0bc-26e7-442e-9c31-4d0d3b5febc2", 00:09:08.181 "assigned_rate_limits": { 00:09:08.181 "rw_ios_per_sec": 0, 00:09:08.181 "rw_mbytes_per_sec": 0, 00:09:08.181 "r_mbytes_per_sec": 0, 00:09:08.181 "w_mbytes_per_sec": 0 00:09:08.181 }, 00:09:08.181 "claimed": false, 00:09:08.181 "zoned": false, 00:09:08.181 "supported_io_types": { 00:09:08.181 "read": true, 00:09:08.181 "write": true, 00:09:08.181 "unmap": true, 00:09:08.181 "flush": true, 00:09:08.181 "reset": true, 00:09:08.181 "nvme_admin": false, 00:09:08.181 "nvme_io": false, 00:09:08.181 "nvme_io_md": false, 00:09:08.181 "write_zeroes": true, 00:09:08.181 "zcopy": false, 00:09:08.181 "get_zone_info": false, 00:09:08.181 "zone_management": false, 00:09:08.181 "zone_append": false, 00:09:08.181 "compare": false, 00:09:08.181 "compare_and_write": false, 00:09:08.181 "abort": false, 00:09:08.181 
"seek_hole": false, 00:09:08.181 "seek_data": false, 00:09:08.181 "copy": false, 00:09:08.181 "nvme_iov_md": false 00:09:08.181 }, 00:09:08.181 "memory_domains": [ 00:09:08.181 { 00:09:08.181 "dma_device_id": "system", 00:09:08.181 "dma_device_type": 1 00:09:08.181 }, 00:09:08.181 { 00:09:08.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.181 "dma_device_type": 2 00:09:08.181 }, 00:09:08.181 { 00:09:08.181 "dma_device_id": "system", 00:09:08.181 "dma_device_type": 1 00:09:08.181 }, 00:09:08.181 { 00:09:08.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.181 "dma_device_type": 2 00:09:08.181 } 00:09:08.181 ], 00:09:08.181 "driver_specific": { 00:09:08.181 "raid": { 00:09:08.181 "uuid": "62a4f0bc-26e7-442e-9c31-4d0d3b5febc2", 00:09:08.181 "strip_size_kb": 64, 00:09:08.181 "state": "online", 00:09:08.181 "raid_level": "concat", 00:09:08.181 "superblock": true, 00:09:08.181 "num_base_bdevs": 2, 00:09:08.181 "num_base_bdevs_discovered": 2, 00:09:08.181 "num_base_bdevs_operational": 2, 00:09:08.181 "base_bdevs_list": [ 00:09:08.181 { 00:09:08.181 "name": "pt1", 00:09:08.181 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:08.181 "is_configured": true, 00:09:08.181 "data_offset": 2048, 00:09:08.181 "data_size": 63488 00:09:08.181 }, 00:09:08.181 { 00:09:08.181 "name": "pt2", 00:09:08.181 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:08.181 "is_configured": true, 00:09:08.181 "data_offset": 2048, 00:09:08.181 "data_size": 63488 00:09:08.181 } 00:09:08.181 ] 00:09:08.181 } 00:09:08.181 } 00:09:08.181 }' 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:08.181 pt2' 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.181 [2024-11-20 11:18:51.265499] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.181 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=62a4f0bc-26e7-442e-9c31-4d0d3b5febc2 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 62a4f0bc-26e7-442e-9c31-4d0d3b5febc2 ']' 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.440 [2024-11-20 11:18:51.309058] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:08.440 [2024-11-20 11:18:51.309095] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:08.440 [2024-11-20 11:18:51.309204] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.440 [2024-11-20 11:18:51.309277] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:08.440 [2024-11-20 11:18:51.309294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:08.440 11:18:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:08.440 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.441 [2024-11-20 11:18:51.452885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:08.441 [2024-11-20 11:18:51.455179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:08.441 [2024-11-20 11:18:51.455277] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:08.441 [2024-11-20 11:18:51.455347] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:08.441 [2024-11-20 11:18:51.455365] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:08.441 [2024-11-20 11:18:51.455378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:08.441 request: 00:09:08.441 { 00:09:08.441 "name": "raid_bdev1", 00:09:08.441 "raid_level": "concat", 00:09:08.441 "base_bdevs": [ 00:09:08.441 "malloc1", 00:09:08.441 "malloc2" 00:09:08.441 ], 00:09:08.441 "strip_size_kb": 64, 00:09:08.441 "superblock": false, 00:09:08.441 "method": "bdev_raid_create", 00:09:08.441 "req_id": 1 00:09:08.441 } 00:09:08.441 Got JSON-RPC error response 00:09:08.441 response: 00:09:08.441 { 00:09:08.441 "code": -17, 00:09:08.441 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:08.441 } 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.441 11:18:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.441 [2024-11-20 11:18:51.524747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:08.441 [2024-11-20 11:18:51.524930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.441 [2024-11-20 11:18:51.524986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:08.441 [2024-11-20 11:18:51.525026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.441 [2024-11-20 11:18:51.527635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.441 [2024-11-20 11:18:51.527775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:08.441 [2024-11-20 11:18:51.527926] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:08.441 [2024-11-20 11:18:51.528051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:08.441 pt1 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:09:08.441 11:18:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.441 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.699 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.699 "name": "raid_bdev1", 00:09:08.699 "uuid": "62a4f0bc-26e7-442e-9c31-4d0d3b5febc2", 00:09:08.699 "strip_size_kb": 64, 00:09:08.699 "state": "configuring", 00:09:08.699 "raid_level": "concat", 00:09:08.699 "superblock": true, 00:09:08.699 "num_base_bdevs": 2, 00:09:08.699 "num_base_bdevs_discovered": 1, 00:09:08.699 "num_base_bdevs_operational": 2, 00:09:08.699 "base_bdevs_list": [ 
00:09:08.699 { 00:09:08.699 "name": "pt1", 00:09:08.699 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:08.699 "is_configured": true, 00:09:08.699 "data_offset": 2048, 00:09:08.699 "data_size": 63488 00:09:08.699 }, 00:09:08.699 { 00:09:08.699 "name": null, 00:09:08.699 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:08.699 "is_configured": false, 00:09:08.699 "data_offset": 2048, 00:09:08.699 "data_size": 63488 00:09:08.699 } 00:09:08.699 ] 00:09:08.699 }' 00:09:08.699 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.699 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.958 [2024-11-20 11:18:51.984103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:08.958 [2024-11-20 11:18:51.984372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.958 [2024-11-20 11:18:51.984519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:08.958 [2024-11-20 11:18:51.984614] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.958 [2024-11-20 11:18:51.985647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.958 [2024-11-20 11:18:51.985795] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:08.958 [2024-11-20 11:18:51.986044] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:08.958 [2024-11-20 11:18:51.986164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:08.958 [2024-11-20 11:18:51.986500] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:08.958 [2024-11-20 11:18:51.986596] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:08.958 [2024-11-20 11:18:51.987180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:08.958 [2024-11-20 11:18:51.987624] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:08.958 [2024-11-20 11:18:51.987725] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:08.958 [2024-11-20 11:18:51.988244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.958 pt2 00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.958 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.958 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.958 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.958 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.958 "name": "raid_bdev1", 00:09:08.958 "uuid": "62a4f0bc-26e7-442e-9c31-4d0d3b5febc2", 00:09:08.958 "strip_size_kb": 64, 00:09:08.958 "state": "online", 00:09:08.958 "raid_level": "concat", 00:09:08.958 "superblock": true, 00:09:08.958 "num_base_bdevs": 2, 00:09:08.958 "num_base_bdevs_discovered": 2, 00:09:08.958 "num_base_bdevs_operational": 2, 00:09:08.958 "base_bdevs_list": [ 00:09:08.958 { 00:09:08.958 "name": "pt1", 00:09:08.958 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:08.958 "is_configured": true, 00:09:08.958 "data_offset": 2048, 00:09:08.958 "data_size": 63488 00:09:08.958 }, 00:09:08.958 { 00:09:08.958 "name": "pt2", 00:09:08.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:08.958 "is_configured": true, 00:09:08.958 "data_offset": 2048, 00:09:08.958 "data_size": 
63488 00:09:08.958 } 00:09:08.958 ] 00:09:08.958 }' 00:09:08.958 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.958 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.531 [2024-11-20 11:18:52.451939] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:09.531 "name": "raid_bdev1", 00:09:09.531 "aliases": [ 00:09:09.531 "62a4f0bc-26e7-442e-9c31-4d0d3b5febc2" 00:09:09.531 ], 00:09:09.531 "product_name": "Raid Volume", 00:09:09.531 "block_size": 512, 00:09:09.531 "num_blocks": 126976, 00:09:09.531 "uuid": "62a4f0bc-26e7-442e-9c31-4d0d3b5febc2", 00:09:09.531 "assigned_rate_limits": { 00:09:09.531 
"rw_ios_per_sec": 0, 00:09:09.531 "rw_mbytes_per_sec": 0, 00:09:09.531 "r_mbytes_per_sec": 0, 00:09:09.531 "w_mbytes_per_sec": 0 00:09:09.531 }, 00:09:09.531 "claimed": false, 00:09:09.531 "zoned": false, 00:09:09.531 "supported_io_types": { 00:09:09.531 "read": true, 00:09:09.531 "write": true, 00:09:09.531 "unmap": true, 00:09:09.531 "flush": true, 00:09:09.531 "reset": true, 00:09:09.531 "nvme_admin": false, 00:09:09.531 "nvme_io": false, 00:09:09.531 "nvme_io_md": false, 00:09:09.531 "write_zeroes": true, 00:09:09.531 "zcopy": false, 00:09:09.531 "get_zone_info": false, 00:09:09.531 "zone_management": false, 00:09:09.531 "zone_append": false, 00:09:09.531 "compare": false, 00:09:09.531 "compare_and_write": false, 00:09:09.531 "abort": false, 00:09:09.531 "seek_hole": false, 00:09:09.531 "seek_data": false, 00:09:09.531 "copy": false, 00:09:09.531 "nvme_iov_md": false 00:09:09.531 }, 00:09:09.531 "memory_domains": [ 00:09:09.531 { 00:09:09.531 "dma_device_id": "system", 00:09:09.531 "dma_device_type": 1 00:09:09.531 }, 00:09:09.531 { 00:09:09.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.531 "dma_device_type": 2 00:09:09.531 }, 00:09:09.531 { 00:09:09.531 "dma_device_id": "system", 00:09:09.531 "dma_device_type": 1 00:09:09.531 }, 00:09:09.531 { 00:09:09.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.531 "dma_device_type": 2 00:09:09.531 } 00:09:09.531 ], 00:09:09.531 "driver_specific": { 00:09:09.531 "raid": { 00:09:09.531 "uuid": "62a4f0bc-26e7-442e-9c31-4d0d3b5febc2", 00:09:09.531 "strip_size_kb": 64, 00:09:09.531 "state": "online", 00:09:09.531 "raid_level": "concat", 00:09:09.531 "superblock": true, 00:09:09.531 "num_base_bdevs": 2, 00:09:09.531 "num_base_bdevs_discovered": 2, 00:09:09.531 "num_base_bdevs_operational": 2, 00:09:09.531 "base_bdevs_list": [ 00:09:09.531 { 00:09:09.531 "name": "pt1", 00:09:09.531 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:09.531 "is_configured": true, 00:09:09.531 "data_offset": 2048, 00:09:09.531 
"data_size": 63488 00:09:09.531 }, 00:09:09.531 { 00:09:09.531 "name": "pt2", 00:09:09.531 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:09.531 "is_configured": true, 00:09:09.531 "data_offset": 2048, 00:09:09.531 "data_size": 63488 00:09:09.531 } 00:09:09.531 ] 00:09:09.531 } 00:09:09.531 } 00:09:09.531 }' 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:09.531 pt2' 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.531 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.532 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.532 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:09.791 [2024-11-20 11:18:52.703965] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 62a4f0bc-26e7-442e-9c31-4d0d3b5febc2 '!=' 62a4f0bc-26e7-442e-9c31-4d0d3b5febc2 ']' 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62303 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62303 
']' 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62303 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62303 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62303' 00:09:09.791 killing process with pid 62303 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62303 00:09:09.791 [2024-11-20 11:18:52.790441] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:09.791 [2024-11-20 11:18:52.790648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.791 11:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62303 00:09:09.791 [2024-11-20 11:18:52.790741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.791 [2024-11-20 11:18:52.790757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:10.049 [2024-11-20 11:18:53.037945] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:11.427 11:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:11.427 00:09:11.427 real 0m4.914s 00:09:11.427 user 0m6.873s 00:09:11.427 sys 0m0.792s 00:09:11.427 11:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.427 11:18:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.427 ************************************ 00:09:11.427 END TEST raid_superblock_test 00:09:11.427 ************************************ 00:09:11.427 11:18:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:09:11.427 11:18:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:11.427 11:18:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.427 11:18:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:11.427 ************************************ 00:09:11.427 START TEST raid_read_error_test 00:09:11.427 ************************************ 00:09:11.427 11:18:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:09:11.427 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:11.427 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:11.427 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:11.427 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:11.427 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:11.427 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:11.427 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:11.427 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:11.427 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:11.427 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:11.427 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:11.427 
11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:11.427 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:11.427 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:11.427 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:11.427 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:11.427 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:11.427 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:11.427 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:11.428 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:11.428 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:11.428 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:11.428 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HgjqA7RRDU 00:09:11.428 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62520 00:09:11.428 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:11.428 11:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62520 00:09:11.428 11:18:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62520 ']' 00:09:11.428 11:18:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.428 11:18:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:09:11.428 11:18:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.428 11:18:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.428 11:18:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.428 [2024-11-20 11:18:54.477023] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:09:11.428 [2024-11-20 11:18:54.477247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62520 ] 00:09:11.687 [2024-11-20 11:18:54.652235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.687 [2024-11-20 11:18:54.779067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.984 [2024-11-20 11:18:54.997737] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:11.984 [2024-11-20 11:18:54.997885] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:12.575 BaseBdev1_malloc 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.575 true 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.575 [2024-11-20 11:18:55.462672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:12.575 [2024-11-20 11:18:55.462753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.575 [2024-11-20 11:18:55.462780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:12.575 [2024-11-20 11:18:55.462792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.575 [2024-11-20 11:18:55.465313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.575 [2024-11-20 11:18:55.465364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:12.575 BaseBdev1 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:12.575 11:18:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.575 BaseBdev2_malloc 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.575 true 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.575 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.575 [2024-11-20 11:18:55.534002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:12.575 [2024-11-20 11:18:55.534078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.575 [2024-11-20 11:18:55.534103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:12.575 [2024-11-20 11:18:55.534114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.576 [2024-11-20 11:18:55.536677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.576 [2024-11-20 11:18:55.536723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:09:12.576 BaseBdev2 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.576 [2024-11-20 11:18:55.546058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:12.576 [2024-11-20 11:18:55.548197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.576 [2024-11-20 11:18:55.548422] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:12.576 [2024-11-20 11:18:55.548439] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:12.576 [2024-11-20 11:18:55.548763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:12.576 [2024-11-20 11:18:55.549013] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:12.576 [2024-11-20 11:18:55.549029] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:12.576 [2024-11-20 11:18:55.549251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.576 "name": "raid_bdev1", 00:09:12.576 "uuid": "2421d5a9-5136-4e04-a5d9-646176db452c", 00:09:12.576 "strip_size_kb": 64, 00:09:12.576 "state": "online", 00:09:12.576 "raid_level": "concat", 00:09:12.576 "superblock": true, 00:09:12.576 "num_base_bdevs": 2, 00:09:12.576 "num_base_bdevs_discovered": 2, 00:09:12.576 "num_base_bdevs_operational": 2, 00:09:12.576 "base_bdevs_list": [ 00:09:12.576 { 00:09:12.576 "name": "BaseBdev1", 00:09:12.576 "uuid": "02a3dca8-9bae-5258-89b0-e2b76997db59", 00:09:12.576 "is_configured": true, 00:09:12.576 "data_offset": 2048, 00:09:12.576 "data_size": 63488 
00:09:12.576 }, 00:09:12.576 { 00:09:12.576 "name": "BaseBdev2", 00:09:12.576 "uuid": "9380f29e-003c-5f88-b5af-9be09f31bbf1", 00:09:12.576 "is_configured": true, 00:09:12.576 "data_offset": 2048, 00:09:12.576 "data_size": 63488 00:09:12.576 } 00:09:12.576 ] 00:09:12.576 }' 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.576 11:18:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.144 11:18:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:13.144 11:18:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:13.144 [2024-11-20 11:18:56.134585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.082 11:18:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.082 "name": "raid_bdev1", 00:09:14.082 "uuid": "2421d5a9-5136-4e04-a5d9-646176db452c", 00:09:14.082 "strip_size_kb": 64, 00:09:14.082 "state": "online", 00:09:14.082 "raid_level": "concat", 00:09:14.082 "superblock": true, 00:09:14.082 "num_base_bdevs": 2, 00:09:14.082 "num_base_bdevs_discovered": 2, 00:09:14.082 "num_base_bdevs_operational": 2, 00:09:14.082 "base_bdevs_list": [ 00:09:14.082 { 00:09:14.082 "name": "BaseBdev1", 00:09:14.082 "uuid": "02a3dca8-9bae-5258-89b0-e2b76997db59", 00:09:14.082 "is_configured": true, 00:09:14.082 "data_offset": 2048, 00:09:14.082 "data_size": 63488 
00:09:14.083 }, 00:09:14.083 { 00:09:14.083 "name": "BaseBdev2", 00:09:14.083 "uuid": "9380f29e-003c-5f88-b5af-9be09f31bbf1", 00:09:14.083 "is_configured": true, 00:09:14.083 "data_offset": 2048, 00:09:14.083 "data_size": 63488 00:09:14.083 } 00:09:14.083 ] 00:09:14.083 }' 00:09:14.083 11:18:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.083 11:18:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.650 11:18:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:14.650 11:18:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.650 11:18:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.650 [2024-11-20 11:18:57.491352] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:14.650 [2024-11-20 11:18:57.491485] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:14.650 [2024-11-20 11:18:57.494765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:14.650 [2024-11-20 11:18:57.494868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.651 [2024-11-20 11:18:57.494938] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:14.651 [2024-11-20 11:18:57.494998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:14.651 { 00:09:14.651 "results": [ 00:09:14.651 { 00:09:14.651 "job": "raid_bdev1", 00:09:14.651 "core_mask": "0x1", 00:09:14.651 "workload": "randrw", 00:09:14.651 "percentage": 50, 00:09:14.651 "status": "finished", 00:09:14.651 "queue_depth": 1, 00:09:14.651 "io_size": 131072, 00:09:14.651 "runtime": 1.357355, 00:09:14.651 "iops": 13729.643313650446, 00:09:14.651 "mibps": 1716.2054142063057, 00:09:14.651 
"io_failed": 1, 00:09:14.651 "io_timeout": 0, 00:09:14.651 "avg_latency_us": 101.08123685967225, 00:09:14.651 "min_latency_us": 27.83580786026201, 00:09:14.651 "max_latency_us": 1681.3275109170306 00:09:14.651 } 00:09:14.651 ], 00:09:14.651 "core_count": 1 00:09:14.651 } 00:09:14.651 11:18:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.651 11:18:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62520 00:09:14.651 11:18:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62520 ']' 00:09:14.651 11:18:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62520 00:09:14.651 11:18:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:14.651 11:18:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.651 11:18:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62520 00:09:14.651 11:18:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:14.651 11:18:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:14.651 11:18:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62520' 00:09:14.651 killing process with pid 62520 00:09:14.651 11:18:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62520 00:09:14.651 [2024-11-20 11:18:57.544443] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:14.651 11:18:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62520 00:09:14.651 [2024-11-20 11:18:57.706423] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.031 11:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HgjqA7RRDU 00:09:16.031 11:18:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:16.031 11:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:16.031 11:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:16.031 11:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:16.031 11:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.031 11:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:16.031 11:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:16.031 00:09:16.031 real 0m4.660s 00:09:16.031 user 0m5.624s 00:09:16.031 sys 0m0.577s 00:09:16.031 ************************************ 00:09:16.031 END TEST raid_read_error_test 00:09:16.031 ************************************ 00:09:16.031 11:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.031 11:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.031 11:18:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:09:16.031 11:18:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:16.031 11:18:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.031 11:18:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:16.031 ************************************ 00:09:16.031 START TEST raid_write_error_test 00:09:16.031 ************************************ 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:16.031 11:18:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:16.031 11:18:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.V4nckcB0hZ 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62666 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62666 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62666 ']' 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.031 11:18:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.292 [2024-11-20 11:18:59.207660] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:09:16.292 [2024-11-20 11:18:59.207902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62666 ] 00:09:16.292 [2024-11-20 11:18:59.382697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.551 [2024-11-20 11:18:59.505712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.811 [2024-11-20 11:18:59.737419] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.811 [2024-11-20 11:18:59.737601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.069 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.069 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:17.070 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:17.070 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:17.070 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.070 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.070 BaseBdev1_malloc 00:09:17.070 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.070 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:17.070 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.070 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.070 true 00:09:17.070 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:17.070 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:17.070 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.070 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.331 [2024-11-20 11:19:00.185721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:17.331 [2024-11-20 11:19:00.185812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.331 [2024-11-20 11:19:00.185841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:17.331 [2024-11-20 11:19:00.185853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.331 [2024-11-20 11:19:00.188270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.331 [2024-11-20 11:19:00.188323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:17.331 BaseBdev1 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.331 BaseBdev2_malloc 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:17.331 11:19:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.331 true 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.331 [2024-11-20 11:19:00.253426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:17.331 [2024-11-20 11:19:00.253605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.331 [2024-11-20 11:19:00.253637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:17.331 [2024-11-20 11:19:00.253649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.331 [2024-11-20 11:19:00.256178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.331 [2024-11-20 11:19:00.256226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:17.331 BaseBdev2 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.331 [2024-11-20 11:19:00.265499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:17.331 [2024-11-20 11:19:00.267596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.331 [2024-11-20 11:19:00.267834] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:17.331 [2024-11-20 11:19:00.267853] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:17.331 [2024-11-20 11:19:00.268170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:17.331 [2024-11-20 11:19:00.268372] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:17.331 [2024-11-20 11:19:00.268386] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:17.331 [2024-11-20 11:19:00.268598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.331 11:19:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.331 "name": "raid_bdev1", 00:09:17.331 "uuid": "b7c106c0-01cb-4dd8-af3a-387b22e233da", 00:09:17.331 "strip_size_kb": 64, 00:09:17.331 "state": "online", 00:09:17.331 "raid_level": "concat", 00:09:17.331 "superblock": true, 00:09:17.331 "num_base_bdevs": 2, 00:09:17.331 "num_base_bdevs_discovered": 2, 00:09:17.331 "num_base_bdevs_operational": 2, 00:09:17.331 "base_bdevs_list": [ 00:09:17.331 { 00:09:17.331 "name": "BaseBdev1", 00:09:17.331 "uuid": "c5afcf1b-7b26-566c-9133-62c27529d2e3", 00:09:17.331 "is_configured": true, 00:09:17.331 "data_offset": 2048, 00:09:17.331 "data_size": 63488 00:09:17.331 }, 00:09:17.331 { 00:09:17.331 "name": "BaseBdev2", 00:09:17.331 "uuid": "a34e6615-f7bf-5235-9828-2ab2112807df", 00:09:17.331 "is_configured": true, 00:09:17.331 "data_offset": 2048, 00:09:17.331 "data_size": 63488 00:09:17.331 } 00:09:17.331 ] 00:09:17.331 }' 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.331 11:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.905 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:09:17.905 11:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:17.905 [2024-11-20 11:19:00.854013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.840 11:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.840 "name": "raid_bdev1", 00:09:18.840 "uuid": "b7c106c0-01cb-4dd8-af3a-387b22e233da", 00:09:18.840 "strip_size_kb": 64, 00:09:18.840 "state": "online", 00:09:18.840 "raid_level": "concat", 00:09:18.840 "superblock": true, 00:09:18.840 "num_base_bdevs": 2, 00:09:18.840 "num_base_bdevs_discovered": 2, 00:09:18.840 "num_base_bdevs_operational": 2, 00:09:18.840 "base_bdevs_list": [ 00:09:18.841 { 00:09:18.841 "name": "BaseBdev1", 00:09:18.841 "uuid": "c5afcf1b-7b26-566c-9133-62c27529d2e3", 00:09:18.841 "is_configured": true, 00:09:18.841 "data_offset": 2048, 00:09:18.841 "data_size": 63488 00:09:18.841 }, 00:09:18.841 { 00:09:18.841 "name": "BaseBdev2", 00:09:18.841 "uuid": "a34e6615-f7bf-5235-9828-2ab2112807df", 00:09:18.841 "is_configured": true, 00:09:18.841 "data_offset": 2048, 00:09:18.841 "data_size": 63488 00:09:18.841 } 00:09:18.841 ] 00:09:18.841 }' 00:09:18.841 11:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.841 11:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.099 11:19:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:19.099 11:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.099 11:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.099 [2024-11-20 11:19:02.194566] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:19.099 [2024-11-20 11:19:02.194607] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.099 [2024-11-20 11:19:02.197403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.099 [2024-11-20 11:19:02.197481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.099 [2024-11-20 11:19:02.197523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.099 [2024-11-20 11:19:02.197538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:19.099 { 00:09:19.099 "results": [ 00:09:19.099 { 00:09:19.099 "job": "raid_bdev1", 00:09:19.099 "core_mask": "0x1", 00:09:19.099 "workload": "randrw", 00:09:19.099 "percentage": 50, 00:09:19.099 "status": "finished", 00:09:19.099 "queue_depth": 1, 00:09:19.099 "io_size": 131072, 00:09:19.099 "runtime": 1.341006, 00:09:19.099 "iops": 14640.501235639513, 00:09:19.099 "mibps": 1830.062654454939, 00:09:19.099 "io_failed": 1, 00:09:19.099 "io_timeout": 0, 00:09:19.099 "avg_latency_us": 94.8180987174463, 00:09:19.099 "min_latency_us": 27.388646288209607, 00:09:19.099 "max_latency_us": 1552.5449781659388 00:09:19.099 } 00:09:19.099 ], 00:09:19.099 "core_count": 1 00:09:19.099 } 00:09:19.099 11:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.099 11:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62666 00:09:19.099 11:19:02 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62666 ']' 00:09:19.099 11:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62666 00:09:19.099 11:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:19.099 11:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.099 11:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62666 00:09:19.358 killing process with pid 62666 00:09:19.358 11:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.358 11:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.358 11:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62666' 00:09:19.358 11:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62666 00:09:19.358 11:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62666 00:09:19.358 [2024-11-20 11:19:02.244036] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:19.358 [2024-11-20 11:19:02.395897] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:20.735 11:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.V4nckcB0hZ 00:09:20.735 11:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:20.735 11:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:20.735 11:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:20.735 11:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:20.735 11:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.735 11:19:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:20.735 11:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:20.735 00:09:20.735 real 0m4.561s 00:09:20.735 user 0m5.513s 00:09:20.735 sys 0m0.572s 00:09:20.735 11:19:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.735 11:19:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.735 ************************************ 00:09:20.735 END TEST raid_write_error_test 00:09:20.735 ************************************ 00:09:20.735 11:19:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:20.735 11:19:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:09:20.735 11:19:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:20.735 11:19:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.735 11:19:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:20.735 ************************************ 00:09:20.735 START TEST raid_state_function_test 00:09:20.735 ************************************ 00:09:20.735 11:19:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:09:20.735 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62806 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62806' 00:09:20.736 Process raid pid: 62806 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62806 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62806 ']' 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.736 11:19:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.736 [2024-11-20 11:19:03.824451] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:09:20.736 [2024-11-20 11:19:03.824608] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.994 [2024-11-20 11:19:03.986611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.254 [2024-11-20 11:19:04.112716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.254 [2024-11-20 11:19:04.330393] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.254 [2024-11-20 11:19:04.330444] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.823 [2024-11-20 11:19:04.727691] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:21.823 [2024-11-20 11:19:04.727846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:21.823 [2024-11-20 11:19:04.727869] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:21.823 [2024-11-20 11:19:04.727886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.823 11:19:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.823 "name": "Existed_Raid", 00:09:21.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.823 "strip_size_kb": 0, 00:09:21.823 "state": "configuring", 00:09:21.823 
"raid_level": "raid1", 00:09:21.823 "superblock": false, 00:09:21.823 "num_base_bdevs": 2, 00:09:21.823 "num_base_bdevs_discovered": 0, 00:09:21.823 "num_base_bdevs_operational": 2, 00:09:21.823 "base_bdevs_list": [ 00:09:21.823 { 00:09:21.823 "name": "BaseBdev1", 00:09:21.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.823 "is_configured": false, 00:09:21.823 "data_offset": 0, 00:09:21.823 "data_size": 0 00:09:21.823 }, 00:09:21.823 { 00:09:21.823 "name": "BaseBdev2", 00:09:21.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.823 "is_configured": false, 00:09:21.823 "data_offset": 0, 00:09:21.823 "data_size": 0 00:09:21.823 } 00:09:21.823 ] 00:09:21.823 }' 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.823 11:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.391 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:22.391 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.391 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.391 [2024-11-20 11:19:05.230977] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:22.391 [2024-11-20 11:19:05.231113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:22.391 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.391 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:22.391 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.391 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:22.391 [2024-11-20 11:19:05.242938] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.391 [2024-11-20 11:19:05.243061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.391 [2024-11-20 11:19:05.243112] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.391 [2024-11-20 11:19:05.243148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.391 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.391 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:22.391 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.391 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.391 [2024-11-20 11:19:05.295387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.391 BaseBdev1 00:09:22.391 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.391 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:22.391 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:22.391 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.392 [ 00:09:22.392 { 00:09:22.392 "name": "BaseBdev1", 00:09:22.392 "aliases": [ 00:09:22.392 "30b6be94-8f73-43db-bd97-d2bccc73a9c7" 00:09:22.392 ], 00:09:22.392 "product_name": "Malloc disk", 00:09:22.392 "block_size": 512, 00:09:22.392 "num_blocks": 65536, 00:09:22.392 "uuid": "30b6be94-8f73-43db-bd97-d2bccc73a9c7", 00:09:22.392 "assigned_rate_limits": { 00:09:22.392 "rw_ios_per_sec": 0, 00:09:22.392 "rw_mbytes_per_sec": 0, 00:09:22.392 "r_mbytes_per_sec": 0, 00:09:22.392 "w_mbytes_per_sec": 0 00:09:22.392 }, 00:09:22.392 "claimed": true, 00:09:22.392 "claim_type": "exclusive_write", 00:09:22.392 "zoned": false, 00:09:22.392 "supported_io_types": { 00:09:22.392 "read": true, 00:09:22.392 "write": true, 00:09:22.392 "unmap": true, 00:09:22.392 "flush": true, 00:09:22.392 "reset": true, 00:09:22.392 "nvme_admin": false, 00:09:22.392 "nvme_io": false, 00:09:22.392 "nvme_io_md": false, 00:09:22.392 "write_zeroes": true, 00:09:22.392 "zcopy": true, 00:09:22.392 "get_zone_info": false, 00:09:22.392 "zone_management": false, 00:09:22.392 "zone_append": false, 00:09:22.392 "compare": false, 00:09:22.392 "compare_and_write": false, 00:09:22.392 "abort": true, 00:09:22.392 "seek_hole": false, 00:09:22.392 "seek_data": false, 00:09:22.392 "copy": true, 00:09:22.392 "nvme_iov_md": 
false 00:09:22.392 }, 00:09:22.392 "memory_domains": [ 00:09:22.392 { 00:09:22.392 "dma_device_id": "system", 00:09:22.392 "dma_device_type": 1 00:09:22.392 }, 00:09:22.392 { 00:09:22.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.392 "dma_device_type": 2 00:09:22.392 } 00:09:22.392 ], 00:09:22.392 "driver_specific": {} 00:09:22.392 } 00:09:22.392 ] 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.392 
11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.392 "name": "Existed_Raid", 00:09:22.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.392 "strip_size_kb": 0, 00:09:22.392 "state": "configuring", 00:09:22.392 "raid_level": "raid1", 00:09:22.392 "superblock": false, 00:09:22.392 "num_base_bdevs": 2, 00:09:22.392 "num_base_bdevs_discovered": 1, 00:09:22.392 "num_base_bdevs_operational": 2, 00:09:22.392 "base_bdevs_list": [ 00:09:22.392 { 00:09:22.392 "name": "BaseBdev1", 00:09:22.392 "uuid": "30b6be94-8f73-43db-bd97-d2bccc73a9c7", 00:09:22.392 "is_configured": true, 00:09:22.392 "data_offset": 0, 00:09:22.392 "data_size": 65536 00:09:22.392 }, 00:09:22.392 { 00:09:22.392 "name": "BaseBdev2", 00:09:22.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.392 "is_configured": false, 00:09:22.392 "data_offset": 0, 00:09:22.392 "data_size": 0 00:09:22.392 } 00:09:22.392 ] 00:09:22.392 }' 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.392 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.960 [2024-11-20 11:19:05.802610] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:22.960 [2024-11-20 11:19:05.802750] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.960 [2024-11-20 11:19:05.814652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.960 [2024-11-20 11:19:05.816601] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.960 [2024-11-20 11:19:05.816724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.960 "name": "Existed_Raid", 00:09:22.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.960 "strip_size_kb": 0, 00:09:22.960 "state": "configuring", 00:09:22.960 "raid_level": "raid1", 00:09:22.960 "superblock": false, 00:09:22.960 "num_base_bdevs": 2, 00:09:22.960 "num_base_bdevs_discovered": 1, 00:09:22.960 "num_base_bdevs_operational": 2, 00:09:22.960 "base_bdevs_list": [ 00:09:22.960 { 00:09:22.960 "name": "BaseBdev1", 00:09:22.960 "uuid": "30b6be94-8f73-43db-bd97-d2bccc73a9c7", 00:09:22.960 "is_configured": true, 00:09:22.960 "data_offset": 0, 00:09:22.960 "data_size": 65536 00:09:22.960 }, 00:09:22.960 { 00:09:22.960 "name": "BaseBdev2", 00:09:22.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.960 "is_configured": false, 00:09:22.960 "data_offset": 0, 00:09:22.960 "data_size": 0 00:09:22.960 } 00:09:22.960 ] 
00:09:22.960 }' 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.960 11:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.220 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:23.220 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.220 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.220 [2024-11-20 11:19:06.327934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.220 [2024-11-20 11:19:06.327993] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:23.220 [2024-11-20 11:19:06.328001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:23.220 [2024-11-20 11:19:06.328260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:23.220 [2024-11-20 11:19:06.328420] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:23.220 [2024-11-20 11:19:06.328435] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:23.220 [2024-11-20 11:19:06.328788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.220 BaseBdev2 00:09:23.220 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.220 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:23.220 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:23.220 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.220 11:19:06 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:09:23.220 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.220 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.220 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.220 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.220 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.479 [ 00:09:23.479 { 00:09:23.479 "name": "BaseBdev2", 00:09:23.479 "aliases": [ 00:09:23.479 "6cc5e826-c4a8-459a-86ee-c55ccca2f638" 00:09:23.479 ], 00:09:23.479 "product_name": "Malloc disk", 00:09:23.479 "block_size": 512, 00:09:23.479 "num_blocks": 65536, 00:09:23.479 "uuid": "6cc5e826-c4a8-459a-86ee-c55ccca2f638", 00:09:23.479 "assigned_rate_limits": { 00:09:23.479 "rw_ios_per_sec": 0, 00:09:23.479 "rw_mbytes_per_sec": 0, 00:09:23.479 "r_mbytes_per_sec": 0, 00:09:23.479 "w_mbytes_per_sec": 0 00:09:23.479 }, 00:09:23.479 "claimed": true, 00:09:23.479 "claim_type": "exclusive_write", 00:09:23.479 "zoned": false, 00:09:23.479 "supported_io_types": { 00:09:23.479 "read": true, 00:09:23.479 "write": true, 00:09:23.479 "unmap": true, 00:09:23.479 "flush": true, 00:09:23.479 "reset": true, 00:09:23.479 "nvme_admin": false, 00:09:23.479 "nvme_io": false, 00:09:23.479 "nvme_io_md": false, 00:09:23.479 "write_zeroes": 
true, 00:09:23.479 "zcopy": true, 00:09:23.479 "get_zone_info": false, 00:09:23.479 "zone_management": false, 00:09:23.479 "zone_append": false, 00:09:23.479 "compare": false, 00:09:23.479 "compare_and_write": false, 00:09:23.479 "abort": true, 00:09:23.479 "seek_hole": false, 00:09:23.479 "seek_data": false, 00:09:23.479 "copy": true, 00:09:23.479 "nvme_iov_md": false 00:09:23.479 }, 00:09:23.479 "memory_domains": [ 00:09:23.479 { 00:09:23.479 "dma_device_id": "system", 00:09:23.479 "dma_device_type": 1 00:09:23.479 }, 00:09:23.479 { 00:09:23.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.479 "dma_device_type": 2 00:09:23.479 } 00:09:23.479 ], 00:09:23.479 "driver_specific": {} 00:09:23.479 } 00:09:23.479 ] 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.479 11:19:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.479 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.479 "name": "Existed_Raid", 00:09:23.480 "uuid": "763a0a1a-0268-40cb-b495-7849f679b518", 00:09:23.480 "strip_size_kb": 0, 00:09:23.480 "state": "online", 00:09:23.480 "raid_level": "raid1", 00:09:23.480 "superblock": false, 00:09:23.480 "num_base_bdevs": 2, 00:09:23.480 "num_base_bdevs_discovered": 2, 00:09:23.480 "num_base_bdevs_operational": 2, 00:09:23.480 "base_bdevs_list": [ 00:09:23.480 { 00:09:23.480 "name": "BaseBdev1", 00:09:23.480 "uuid": "30b6be94-8f73-43db-bd97-d2bccc73a9c7", 00:09:23.480 "is_configured": true, 00:09:23.480 "data_offset": 0, 00:09:23.480 "data_size": 65536 00:09:23.480 }, 00:09:23.480 { 00:09:23.480 "name": "BaseBdev2", 00:09:23.480 "uuid": "6cc5e826-c4a8-459a-86ee-c55ccca2f638", 00:09:23.480 "is_configured": true, 00:09:23.480 "data_offset": 0, 00:09:23.480 "data_size": 65536 00:09:23.480 } 00:09:23.480 ] 00:09:23.480 }' 00:09:23.480 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.480 11:19:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.740 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:23.740 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:23.740 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:23.740 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:23.740 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:23.740 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:23.740 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:23.740 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:23.740 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.740 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.740 [2024-11-20 11:19:06.807573] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.740 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.740 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:23.740 "name": "Existed_Raid", 00:09:23.740 "aliases": [ 00:09:23.740 "763a0a1a-0268-40cb-b495-7849f679b518" 00:09:23.740 ], 00:09:23.740 "product_name": "Raid Volume", 00:09:23.740 "block_size": 512, 00:09:23.740 "num_blocks": 65536, 00:09:23.740 "uuid": "763a0a1a-0268-40cb-b495-7849f679b518", 00:09:23.740 "assigned_rate_limits": { 00:09:23.740 "rw_ios_per_sec": 0, 00:09:23.740 "rw_mbytes_per_sec": 0, 00:09:23.740 "r_mbytes_per_sec": 0, 00:09:23.740 
"w_mbytes_per_sec": 0 00:09:23.740 }, 00:09:23.740 "claimed": false, 00:09:23.740 "zoned": false, 00:09:23.740 "supported_io_types": { 00:09:23.740 "read": true, 00:09:23.740 "write": true, 00:09:23.740 "unmap": false, 00:09:23.740 "flush": false, 00:09:23.740 "reset": true, 00:09:23.740 "nvme_admin": false, 00:09:23.740 "nvme_io": false, 00:09:23.740 "nvme_io_md": false, 00:09:23.740 "write_zeroes": true, 00:09:23.740 "zcopy": false, 00:09:23.740 "get_zone_info": false, 00:09:23.740 "zone_management": false, 00:09:23.740 "zone_append": false, 00:09:23.740 "compare": false, 00:09:23.740 "compare_and_write": false, 00:09:23.740 "abort": false, 00:09:23.740 "seek_hole": false, 00:09:23.740 "seek_data": false, 00:09:23.740 "copy": false, 00:09:23.740 "nvme_iov_md": false 00:09:23.740 }, 00:09:23.740 "memory_domains": [ 00:09:23.740 { 00:09:23.740 "dma_device_id": "system", 00:09:23.740 "dma_device_type": 1 00:09:23.740 }, 00:09:23.740 { 00:09:23.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.740 "dma_device_type": 2 00:09:23.740 }, 00:09:23.740 { 00:09:23.740 "dma_device_id": "system", 00:09:23.740 "dma_device_type": 1 00:09:23.740 }, 00:09:23.740 { 00:09:23.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.740 "dma_device_type": 2 00:09:23.740 } 00:09:23.740 ], 00:09:23.740 "driver_specific": { 00:09:23.740 "raid": { 00:09:23.740 "uuid": "763a0a1a-0268-40cb-b495-7849f679b518", 00:09:23.740 "strip_size_kb": 0, 00:09:23.740 "state": "online", 00:09:23.740 "raid_level": "raid1", 00:09:23.740 "superblock": false, 00:09:23.740 "num_base_bdevs": 2, 00:09:23.740 "num_base_bdevs_discovered": 2, 00:09:23.740 "num_base_bdevs_operational": 2, 00:09:23.740 "base_bdevs_list": [ 00:09:23.740 { 00:09:23.740 "name": "BaseBdev1", 00:09:23.740 "uuid": "30b6be94-8f73-43db-bd97-d2bccc73a9c7", 00:09:23.740 "is_configured": true, 00:09:23.740 "data_offset": 0, 00:09:23.740 "data_size": 65536 00:09:23.740 }, 00:09:23.740 { 00:09:23.740 "name": "BaseBdev2", 00:09:23.740 "uuid": 
"6cc5e826-c4a8-459a-86ee-c55ccca2f638", 00:09:23.740 "is_configured": true, 00:09:23.740 "data_offset": 0, 00:09:23.740 "data_size": 65536 00:09:23.740 } 00:09:23.740 ] 00:09:23.740 } 00:09:23.740 } 00:09:23.740 }' 00:09:23.740 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:23.998 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:23.998 BaseBdev2' 00:09:23.998 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.998 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:23.998 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.998 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:23.998 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.998 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.998 11:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.998 11:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.998 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.998 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.998 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.998 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:23.998 11:19:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.998 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.998 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.998 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.998 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.998 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.998 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:23.998 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.998 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.998 [2024-11-20 11:19:07.058878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:24.257 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.257 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:24.257 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:24.257 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:24.257 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:24.257 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:24.258 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:24.258 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:24.258 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.258 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.258 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.258 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:24.258 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.258 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.258 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.258 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.258 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.258 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.258 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.258 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.258 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.258 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.258 "name": "Existed_Raid", 00:09:24.258 "uuid": "763a0a1a-0268-40cb-b495-7849f679b518", 00:09:24.258 "strip_size_kb": 0, 00:09:24.258 "state": "online", 00:09:24.258 "raid_level": "raid1", 00:09:24.258 "superblock": false, 00:09:24.258 "num_base_bdevs": 2, 00:09:24.258 "num_base_bdevs_discovered": 1, 00:09:24.258 "num_base_bdevs_operational": 1, 00:09:24.258 "base_bdevs_list": [ 00:09:24.258 { 
00:09:24.258 "name": null, 00:09:24.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.258 "is_configured": false, 00:09:24.258 "data_offset": 0, 00:09:24.258 "data_size": 65536 00:09:24.258 }, 00:09:24.258 { 00:09:24.258 "name": "BaseBdev2", 00:09:24.258 "uuid": "6cc5e826-c4a8-459a-86ee-c55ccca2f638", 00:09:24.258 "is_configured": true, 00:09:24.258 "data_offset": 0, 00:09:24.258 "data_size": 65536 00:09:24.258 } 00:09:24.258 ] 00:09:24.258 }' 00:09:24.258 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.258 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.515 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:24.515 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:24.773 [2024-11-20 11:19:07.687524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:24.773 [2024-11-20 11:19:07.687640] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.773 [2024-11-20 11:19:07.789697] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.773 [2024-11-20 11:19:07.789858] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:24.773 [2024-11-20 11:19:07.789884] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62806 00:09:24.773 11:19:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62806 ']' 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62806 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62806 00:09:24.773 killing process with pid 62806 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62806' 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62806 00:09:24.773 [2024-11-20 11:19:07.884072] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:24.773 11:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62806 00:09:25.032 [2024-11-20 11:19:07.903366] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:26.012 11:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:26.012 00:09:26.012 real 0m5.354s 00:09:26.012 user 0m7.772s 00:09:26.012 sys 0m0.859s 00:09:26.012 11:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.012 11:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.012 ************************************ 00:09:26.012 END TEST raid_state_function_test 00:09:26.012 ************************************ 00:09:26.272 11:19:09 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:09:26.272 11:19:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:26.272 11:19:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.272 11:19:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:26.272 ************************************ 00:09:26.272 START TEST raid_state_function_test_sb 00:09:26.272 ************************************ 00:09:26.272 11:19:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:09:26.272 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:26.272 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:26.272 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:26.272 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:26.272 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:26.272 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.272 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:26.272 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:26.272 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.272 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:26.272 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:26.272 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.272 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:26.273 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:26.273 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:26.273 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:26.273 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:26.273 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:26.273 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:26.273 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:26.273 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:26.273 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:26.273 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63059 00:09:26.273 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:26.273 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63059' 00:09:26.273 Process raid pid: 63059 00:09:26.273 11:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63059 00:09:26.273 11:19:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63059 ']' 00:09:26.273 11:19:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.273 11:19:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.273 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.273 11:19:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.273 11:19:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.273 11:19:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.273 [2024-11-20 11:19:09.242061] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:09:26.273 [2024-11-20 11:19:09.242246] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.531 [2024-11-20 11:19:09.417164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.531 [2024-11-20 11:19:09.538119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.791 [2024-11-20 11:19:09.741256] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.791 [2024-11-20 11:19:09.741303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.050 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.050 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:27.050 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:27.050 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.050 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.050 [2024-11-20 11:19:10.134763] 
bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:27.050 [2024-11-20 11:19:10.134822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:27.050 [2024-11-20 11:19:10.134833] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:27.050 [2024-11-20 11:19:10.134843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:27.050 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.050 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:27.050 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.050 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.050 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.050 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.050 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:27.050 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.050 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.050 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.050 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.050 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.050 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:09:27.050 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.050 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.050 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.310 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.310 "name": "Existed_Raid", 00:09:27.310 "uuid": "8ca4b759-6002-4fd1-bf72-fa5df045cc03", 00:09:27.310 "strip_size_kb": 0, 00:09:27.310 "state": "configuring", 00:09:27.310 "raid_level": "raid1", 00:09:27.310 "superblock": true, 00:09:27.310 "num_base_bdevs": 2, 00:09:27.310 "num_base_bdevs_discovered": 0, 00:09:27.310 "num_base_bdevs_operational": 2, 00:09:27.310 "base_bdevs_list": [ 00:09:27.310 { 00:09:27.310 "name": "BaseBdev1", 00:09:27.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.310 "is_configured": false, 00:09:27.310 "data_offset": 0, 00:09:27.310 "data_size": 0 00:09:27.310 }, 00:09:27.310 { 00:09:27.310 "name": "BaseBdev2", 00:09:27.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.310 "is_configured": false, 00:09:27.310 "data_offset": 0, 00:09:27.310 "data_size": 0 00:09:27.310 } 00:09:27.310 ] 00:09:27.310 }' 00:09:27.310 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.310 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.570 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:27.570 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.570 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.570 [2024-11-20 11:19:10.601908] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:09:27.570 [2024-11-20 11:19:10.602007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:27.570 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.570 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:27.570 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.570 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.570 [2024-11-20 11:19:10.609901] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:27.570 [2024-11-20 11:19:10.609998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:27.570 [2024-11-20 11:19:10.610042] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:27.570 [2024-11-20 11:19:10.610076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:27.570 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.570 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:27.570 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.570 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.570 [2024-11-20 11:19:10.655154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:27.570 BaseBdev1 00:09:27.570 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.570 11:19:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:27.570 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:27.570 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:27.570 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:27.571 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:27.571 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:27.571 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:27.571 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.571 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.571 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.571 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:27.571 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.571 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.571 [ 00:09:27.571 { 00:09:27.571 "name": "BaseBdev1", 00:09:27.571 "aliases": [ 00:09:27.571 "5f0d5695-cdac-452f-9906-5df5d0a69a25" 00:09:27.571 ], 00:09:27.571 "product_name": "Malloc disk", 00:09:27.571 "block_size": 512, 00:09:27.571 "num_blocks": 65536, 00:09:27.571 "uuid": "5f0d5695-cdac-452f-9906-5df5d0a69a25", 00:09:27.571 "assigned_rate_limits": { 00:09:27.571 "rw_ios_per_sec": 0, 00:09:27.571 "rw_mbytes_per_sec": 0, 00:09:27.571 "r_mbytes_per_sec": 0, 00:09:27.571 "w_mbytes_per_sec": 0 00:09:27.571 }, 00:09:27.571 "claimed": true, 
00:09:27.571 "claim_type": "exclusive_write", 00:09:27.830 "zoned": false, 00:09:27.830 "supported_io_types": { 00:09:27.830 "read": true, 00:09:27.830 "write": true, 00:09:27.830 "unmap": true, 00:09:27.830 "flush": true, 00:09:27.831 "reset": true, 00:09:27.831 "nvme_admin": false, 00:09:27.831 "nvme_io": false, 00:09:27.831 "nvme_io_md": false, 00:09:27.831 "write_zeroes": true, 00:09:27.831 "zcopy": true, 00:09:27.831 "get_zone_info": false, 00:09:27.831 "zone_management": false, 00:09:27.831 "zone_append": false, 00:09:27.831 "compare": false, 00:09:27.831 "compare_and_write": false, 00:09:27.831 "abort": true, 00:09:27.831 "seek_hole": false, 00:09:27.831 "seek_data": false, 00:09:27.831 "copy": true, 00:09:27.831 "nvme_iov_md": false 00:09:27.831 }, 00:09:27.831 "memory_domains": [ 00:09:27.831 { 00:09:27.831 "dma_device_id": "system", 00:09:27.831 "dma_device_type": 1 00:09:27.831 }, 00:09:27.831 { 00:09:27.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.831 "dma_device_type": 2 00:09:27.831 } 00:09:27.831 ], 00:09:27.831 "driver_specific": {} 00:09:27.831 } 00:09:27.831 ] 00:09:27.831 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.831 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:27.831 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:27.831 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.831 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.831 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.831 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.831 11:19:10 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:27.831 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.831 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.831 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.831 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.831 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.831 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.831 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.831 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.831 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.831 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.831 "name": "Existed_Raid", 00:09:27.831 "uuid": "a0c1bb4d-c893-4cb3-9107-b8f98ebb693a", 00:09:27.831 "strip_size_kb": 0, 00:09:27.831 "state": "configuring", 00:09:27.831 "raid_level": "raid1", 00:09:27.831 "superblock": true, 00:09:27.831 "num_base_bdevs": 2, 00:09:27.831 "num_base_bdevs_discovered": 1, 00:09:27.831 "num_base_bdevs_operational": 2, 00:09:27.831 "base_bdevs_list": [ 00:09:27.831 { 00:09:27.831 "name": "BaseBdev1", 00:09:27.831 "uuid": "5f0d5695-cdac-452f-9906-5df5d0a69a25", 00:09:27.831 "is_configured": true, 00:09:27.831 "data_offset": 2048, 00:09:27.831 "data_size": 63488 00:09:27.831 }, 00:09:27.831 { 00:09:27.831 "name": "BaseBdev2", 00:09:27.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.831 "is_configured": false, 00:09:27.831 
"data_offset": 0, 00:09:27.831 "data_size": 0 00:09:27.831 } 00:09:27.831 ] 00:09:27.831 }' 00:09:27.831 11:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.831 11:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.091 [2024-11-20 11:19:11.142381] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:28.091 [2024-11-20 11:19:11.142437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.091 [2024-11-20 11:19:11.154406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.091 [2024-11-20 11:19:11.156347] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.091 [2024-11-20 11:19:11.156461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.091 "name": "Existed_Raid", 00:09:28.091 "uuid": "15f4ffea-d5bc-4959-8643-f3e3ab908306", 00:09:28.091 "strip_size_kb": 0, 00:09:28.091 "state": "configuring", 00:09:28.091 "raid_level": "raid1", 00:09:28.091 "superblock": true, 00:09:28.091 "num_base_bdevs": 2, 00:09:28.091 "num_base_bdevs_discovered": 1, 00:09:28.091 "num_base_bdevs_operational": 2, 00:09:28.091 "base_bdevs_list": [ 00:09:28.091 { 00:09:28.091 "name": "BaseBdev1", 00:09:28.091 "uuid": "5f0d5695-cdac-452f-9906-5df5d0a69a25", 00:09:28.091 "is_configured": true, 00:09:28.091 "data_offset": 2048, 00:09:28.091 "data_size": 63488 00:09:28.091 }, 00:09:28.091 { 00:09:28.091 "name": "BaseBdev2", 00:09:28.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.091 "is_configured": false, 00:09:28.091 "data_offset": 0, 00:09:28.091 "data_size": 0 00:09:28.091 } 00:09:28.091 ] 00:09:28.091 }' 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.091 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.660 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:28.660 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.660 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.660 [2024-11-20 11:19:11.655779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.660 [2024-11-20 11:19:11.656045] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:28.660 [2024-11-20 11:19:11.656061] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:28.661 [2024-11-20 11:19:11.656325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:28.661 
[2024-11-20 11:19:11.656496] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:28.661 BaseBdev2 00:09:28.661 [2024-11-20 11:19:11.656558] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:28.661 [2024-11-20 11:19:11.656741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.661 [ 00:09:28.661 { 00:09:28.661 "name": "BaseBdev2", 00:09:28.661 "aliases": [ 00:09:28.661 "a9665164-a8c3-4c2a-b1e9-1c1ce33b6717" 00:09:28.661 ], 00:09:28.661 "product_name": "Malloc disk", 00:09:28.661 "block_size": 512, 00:09:28.661 "num_blocks": 65536, 00:09:28.661 "uuid": "a9665164-a8c3-4c2a-b1e9-1c1ce33b6717", 00:09:28.661 "assigned_rate_limits": { 00:09:28.661 "rw_ios_per_sec": 0, 00:09:28.661 "rw_mbytes_per_sec": 0, 00:09:28.661 "r_mbytes_per_sec": 0, 00:09:28.661 "w_mbytes_per_sec": 0 00:09:28.661 }, 00:09:28.661 "claimed": true, 00:09:28.661 "claim_type": "exclusive_write", 00:09:28.661 "zoned": false, 00:09:28.661 "supported_io_types": { 00:09:28.661 "read": true, 00:09:28.661 "write": true, 00:09:28.661 "unmap": true, 00:09:28.661 "flush": true, 00:09:28.661 "reset": true, 00:09:28.661 "nvme_admin": false, 00:09:28.661 "nvme_io": false, 00:09:28.661 "nvme_io_md": false, 00:09:28.661 "write_zeroes": true, 00:09:28.661 "zcopy": true, 00:09:28.661 "get_zone_info": false, 00:09:28.661 "zone_management": false, 00:09:28.661 "zone_append": false, 00:09:28.661 "compare": false, 00:09:28.661 "compare_and_write": false, 00:09:28.661 "abort": true, 00:09:28.661 "seek_hole": false, 00:09:28.661 "seek_data": false, 00:09:28.661 "copy": true, 00:09:28.661 "nvme_iov_md": false 00:09:28.661 }, 00:09:28.661 "memory_domains": [ 00:09:28.661 { 00:09:28.661 "dma_device_id": "system", 00:09:28.661 "dma_device_type": 1 00:09:28.661 }, 00:09:28.661 { 00:09:28.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.661 "dma_device_type": 2 00:09:28.661 } 00:09:28.661 ], 00:09:28.661 "driver_specific": {} 00:09:28.661 } 00:09:28.661 ] 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:28.661 "name": "Existed_Raid", 00:09:28.661 "uuid": "15f4ffea-d5bc-4959-8643-f3e3ab908306", 00:09:28.661 "strip_size_kb": 0, 00:09:28.661 "state": "online", 00:09:28.661 "raid_level": "raid1", 00:09:28.661 "superblock": true, 00:09:28.661 "num_base_bdevs": 2, 00:09:28.661 "num_base_bdevs_discovered": 2, 00:09:28.661 "num_base_bdevs_operational": 2, 00:09:28.661 "base_bdevs_list": [ 00:09:28.661 { 00:09:28.661 "name": "BaseBdev1", 00:09:28.661 "uuid": "5f0d5695-cdac-452f-9906-5df5d0a69a25", 00:09:28.661 "is_configured": true, 00:09:28.661 "data_offset": 2048, 00:09:28.661 "data_size": 63488 00:09:28.661 }, 00:09:28.661 { 00:09:28.661 "name": "BaseBdev2", 00:09:28.661 "uuid": "a9665164-a8c3-4c2a-b1e9-1c1ce33b6717", 00:09:28.661 "is_configured": true, 00:09:28.661 "data_offset": 2048, 00:09:28.661 "data_size": 63488 00:09:28.661 } 00:09:28.661 ] 00:09:28.661 }' 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.661 11:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:29.230 11:19:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:29.230 [2024-11-20 11:19:12.151588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:29.230 "name": "Existed_Raid", 00:09:29.230 "aliases": [ 00:09:29.230 "15f4ffea-d5bc-4959-8643-f3e3ab908306" 00:09:29.230 ], 00:09:29.230 "product_name": "Raid Volume", 00:09:29.230 "block_size": 512, 00:09:29.230 "num_blocks": 63488, 00:09:29.230 "uuid": "15f4ffea-d5bc-4959-8643-f3e3ab908306", 00:09:29.230 "assigned_rate_limits": { 00:09:29.230 "rw_ios_per_sec": 0, 00:09:29.230 "rw_mbytes_per_sec": 0, 00:09:29.230 "r_mbytes_per_sec": 0, 00:09:29.230 "w_mbytes_per_sec": 0 00:09:29.230 }, 00:09:29.230 "claimed": false, 00:09:29.230 "zoned": false, 00:09:29.230 "supported_io_types": { 00:09:29.230 "read": true, 00:09:29.230 "write": true, 00:09:29.230 "unmap": false, 00:09:29.230 "flush": false, 00:09:29.230 "reset": true, 00:09:29.230 "nvme_admin": false, 00:09:29.230 "nvme_io": false, 00:09:29.230 "nvme_io_md": false, 00:09:29.230 "write_zeroes": true, 00:09:29.230 "zcopy": false, 00:09:29.230 "get_zone_info": false, 00:09:29.230 "zone_management": false, 00:09:29.230 "zone_append": false, 00:09:29.230 "compare": false, 00:09:29.230 "compare_and_write": false, 00:09:29.230 "abort": false, 00:09:29.230 "seek_hole": false, 00:09:29.230 "seek_data": false, 00:09:29.230 "copy": false, 00:09:29.230 "nvme_iov_md": false 00:09:29.230 }, 00:09:29.230 "memory_domains": [ 00:09:29.230 { 00:09:29.230 "dma_device_id": "system", 00:09:29.230 
"dma_device_type": 1 00:09:29.230 }, 00:09:29.230 { 00:09:29.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.230 "dma_device_type": 2 00:09:29.230 }, 00:09:29.230 { 00:09:29.230 "dma_device_id": "system", 00:09:29.230 "dma_device_type": 1 00:09:29.230 }, 00:09:29.230 { 00:09:29.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.230 "dma_device_type": 2 00:09:29.230 } 00:09:29.230 ], 00:09:29.230 "driver_specific": { 00:09:29.230 "raid": { 00:09:29.230 "uuid": "15f4ffea-d5bc-4959-8643-f3e3ab908306", 00:09:29.230 "strip_size_kb": 0, 00:09:29.230 "state": "online", 00:09:29.230 "raid_level": "raid1", 00:09:29.230 "superblock": true, 00:09:29.230 "num_base_bdevs": 2, 00:09:29.230 "num_base_bdevs_discovered": 2, 00:09:29.230 "num_base_bdevs_operational": 2, 00:09:29.230 "base_bdevs_list": [ 00:09:29.230 { 00:09:29.230 "name": "BaseBdev1", 00:09:29.230 "uuid": "5f0d5695-cdac-452f-9906-5df5d0a69a25", 00:09:29.230 "is_configured": true, 00:09:29.230 "data_offset": 2048, 00:09:29.230 "data_size": 63488 00:09:29.230 }, 00:09:29.230 { 00:09:29.230 "name": "BaseBdev2", 00:09:29.230 "uuid": "a9665164-a8c3-4c2a-b1e9-1c1ce33b6717", 00:09:29.230 "is_configured": true, 00:09:29.230 "data_offset": 2048, 00:09:29.230 "data_size": 63488 00:09:29.230 } 00:09:29.230 ] 00:09:29.230 } 00:09:29.230 } 00:09:29.230 }' 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:29.230 BaseBdev2' 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.230 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:29.489 11:19:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.489 [2024-11-20 11:19:12.382884] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.489 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.489 "name": "Existed_Raid", 00:09:29.489 "uuid": "15f4ffea-d5bc-4959-8643-f3e3ab908306", 00:09:29.490 "strip_size_kb": 0, 00:09:29.490 "state": "online", 00:09:29.490 "raid_level": "raid1", 00:09:29.490 "superblock": true, 00:09:29.490 "num_base_bdevs": 2, 00:09:29.490 "num_base_bdevs_discovered": 1, 00:09:29.490 "num_base_bdevs_operational": 1, 00:09:29.490 "base_bdevs_list": [ 00:09:29.490 { 00:09:29.490 "name": null, 00:09:29.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.490 "is_configured": false, 00:09:29.490 "data_offset": 0, 00:09:29.490 "data_size": 63488 00:09:29.490 }, 00:09:29.490 { 00:09:29.490 "name": "BaseBdev2", 00:09:29.490 "uuid": "a9665164-a8c3-4c2a-b1e9-1c1ce33b6717", 00:09:29.490 "is_configured": true, 00:09:29.490 "data_offset": 2048, 00:09:29.490 "data_size": 63488 00:09:29.490 } 00:09:29.490 ] 00:09:29.490 }' 00:09:29.490 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.490 11:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.057 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:09:30.057 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:30.057 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:30.057 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.057 11:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.057 11:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.057 11:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.057 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:30.057 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:30.057 11:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:30.057 11:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.057 11:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.057 [2024-11-20 11:19:12.979778] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:30.057 [2024-11-20 11:19:12.979962] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.057 [2024-11-20 11:19:13.081474] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.057 [2024-11-20 11:19:13.081635] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.057 [2024-11-20 11:19:13.081680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:30.057 11:19:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.057 11:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:30.057 11:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:30.057 11:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.057 11:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:30.057 11:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.057 11:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.057 11:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.057 11:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:30.057 11:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:30.057 11:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:30.057 11:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63059 00:09:30.057 11:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63059 ']' 00:09:30.057 11:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63059 00:09:30.057 11:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:30.057 11:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.057 11:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63059 00:09:30.317 killing process with pid 63059 00:09:30.317 11:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:09:30.317 11:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.317 11:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63059' 00:09:30.317 11:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63059 00:09:30.317 [2024-11-20 11:19:13.181706] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.317 11:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63059 00:09:30.317 [2024-11-20 11:19:13.199036] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:31.255 11:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:31.255 00:09:31.255 real 0m5.177s 00:09:31.255 user 0m7.519s 00:09:31.255 sys 0m0.838s 00:09:31.255 11:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.255 11:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.255 ************************************ 00:09:31.255 END TEST raid_state_function_test_sb 00:09:31.255 ************************************ 00:09:31.516 11:19:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:09:31.516 11:19:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:31.516 11:19:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.516 11:19:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:31.516 ************************************ 00:09:31.516 START TEST raid_superblock_test 00:09:31.516 ************************************ 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63317 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63317 00:09:31.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63317 ']' 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.516 11:19:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.516 [2024-11-20 11:19:14.484282] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:09:31.516 [2024-11-20 11:19:14.484503] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63317 ] 00:09:31.775 [2024-11-20 11:19:14.659922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.775 [2024-11-20 11:19:14.777999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.033 [2024-11-20 11:19:14.980649] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.033 [2024-11-20 11:19:14.980709] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.291 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.291 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:32.292 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:32.292 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= 
num_base_bdevs )) 00:09:32.292 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:32.292 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:32.292 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:32.292 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:32.292 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:32.292 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:32.292 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:32.292 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.292 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.550 malloc1 00:09:32.550 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.550 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:32.550 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.550 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.550 [2024-11-20 11:19:15.424666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:32.550 [2024-11-20 11:19:15.424834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.550 [2024-11-20 11:19:15.424887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:32.550 [2024-11-20 11:19:15.424929] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:09:32.550 [2024-11-20 11:19:15.427348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.550 [2024-11-20 11:19:15.427445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:32.550 pt1 00:09:32.550 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.550 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:32.550 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:32.550 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:32.550 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:32.550 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:32.550 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:32.550 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:32.550 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:32.550 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:32.550 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.550 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.550 malloc2 00:09:32.550 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.550 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:32.550 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:32.550 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.550 [2024-11-20 11:19:15.485999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:32.550 [2024-11-20 11:19:15.486064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.550 [2024-11-20 11:19:15.486087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:32.550 [2024-11-20 11:19:15.486097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.550 [2024-11-20 11:19:15.488535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.550 [2024-11-20 11:19:15.488629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:32.550 pt2 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.551 [2024-11-20 11:19:15.498039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:32.551 [2024-11-20 11:19:15.500081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:32.551 [2024-11-20 11:19:15.500317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:32.551 [2024-11-20 11:19:15.500341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, 
blocklen 512 00:09:32.551 [2024-11-20 11:19:15.500642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:32.551 [2024-11-20 11:19:15.500819] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:32.551 [2024-11-20 11:19:15.500836] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:32.551 [2024-11-20 11:19:15.501008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.551 
11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.551 "name": "raid_bdev1", 00:09:32.551 "uuid": "4913d11b-7aef-46d5-9d8b-7d2281e903ec", 00:09:32.551 "strip_size_kb": 0, 00:09:32.551 "state": "online", 00:09:32.551 "raid_level": "raid1", 00:09:32.551 "superblock": true, 00:09:32.551 "num_base_bdevs": 2, 00:09:32.551 "num_base_bdevs_discovered": 2, 00:09:32.551 "num_base_bdevs_operational": 2, 00:09:32.551 "base_bdevs_list": [ 00:09:32.551 { 00:09:32.551 "name": "pt1", 00:09:32.551 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:32.551 "is_configured": true, 00:09:32.551 "data_offset": 2048, 00:09:32.551 "data_size": 63488 00:09:32.551 }, 00:09:32.551 { 00:09:32.551 "name": "pt2", 00:09:32.551 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:32.551 "is_configured": true, 00:09:32.551 "data_offset": 2048, 00:09:32.551 "data_size": 63488 00:09:32.551 } 00:09:32.551 ] 00:09:32.551 }' 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.551 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.119 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:33.119 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:33.119 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:33.119 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:33.119 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 
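The `verify_raid_bdev_state` helper in the trace above is bash with `jq`: it pulls the `raid_bdev1` entry out of `bdev_raid_get_bdevs all` and compares its fields against the expected values. A minimal Python sketch of the same comparison, using a trimmed copy of the JSON captured in this log (the field names come from the log; the helper itself is not Python):

```python
import json

# Trimmed copy of the raid_bdev_info JSON captured from
# `rpc_cmd bdev_raid_get_bdevs all` in the log above.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, num_operational):
    # Mirrors the bash helper's checks: every field must match,
    # otherwise the test run fails at this step.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational

# The call the trace performs at bdev_raid.sh@431:
# verify_raid_bdev_state raid_bdev1 online raid1 0 2
verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 2)
```

The strip size is 0 because raid1 mirrors rather than stripes; the same helper is reused later in the log with `configuring` as the expected state while only `pt1` is claimed.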
00:09:33.119 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:33.119 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:33.119 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:33.119 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.119 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.119 [2024-11-20 11:19:15.973512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.119 11:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.119 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:33.119 "name": "raid_bdev1", 00:09:33.119 "aliases": [ 00:09:33.119 "4913d11b-7aef-46d5-9d8b-7d2281e903ec" 00:09:33.119 ], 00:09:33.119 "product_name": "Raid Volume", 00:09:33.119 "block_size": 512, 00:09:33.119 "num_blocks": 63488, 00:09:33.119 "uuid": "4913d11b-7aef-46d5-9d8b-7d2281e903ec", 00:09:33.119 "assigned_rate_limits": { 00:09:33.119 "rw_ios_per_sec": 0, 00:09:33.119 "rw_mbytes_per_sec": 0, 00:09:33.119 "r_mbytes_per_sec": 0, 00:09:33.119 "w_mbytes_per_sec": 0 00:09:33.119 }, 00:09:33.119 "claimed": false, 00:09:33.119 "zoned": false, 00:09:33.119 "supported_io_types": { 00:09:33.119 "read": true, 00:09:33.119 "write": true, 00:09:33.119 "unmap": false, 00:09:33.119 "flush": false, 00:09:33.119 "reset": true, 00:09:33.119 "nvme_admin": false, 00:09:33.119 "nvme_io": false, 00:09:33.119 "nvme_io_md": false, 00:09:33.119 "write_zeroes": true, 00:09:33.119 "zcopy": false, 00:09:33.119 "get_zone_info": false, 00:09:33.119 "zone_management": false, 00:09:33.119 "zone_append": false, 00:09:33.119 "compare": false, 00:09:33.119 "compare_and_write": false, 00:09:33.119 "abort": false, 00:09:33.119 "seek_hole": 
false, 00:09:33.119 "seek_data": false, 00:09:33.119 "copy": false, 00:09:33.119 "nvme_iov_md": false 00:09:33.119 }, 00:09:33.119 "memory_domains": [ 00:09:33.119 { 00:09:33.119 "dma_device_id": "system", 00:09:33.119 "dma_device_type": 1 00:09:33.119 }, 00:09:33.119 { 00:09:33.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.119 "dma_device_type": 2 00:09:33.119 }, 00:09:33.119 { 00:09:33.119 "dma_device_id": "system", 00:09:33.119 "dma_device_type": 1 00:09:33.119 }, 00:09:33.119 { 00:09:33.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.120 "dma_device_type": 2 00:09:33.120 } 00:09:33.120 ], 00:09:33.120 "driver_specific": { 00:09:33.120 "raid": { 00:09:33.120 "uuid": "4913d11b-7aef-46d5-9d8b-7d2281e903ec", 00:09:33.120 "strip_size_kb": 0, 00:09:33.120 "state": "online", 00:09:33.120 "raid_level": "raid1", 00:09:33.120 "superblock": true, 00:09:33.120 "num_base_bdevs": 2, 00:09:33.120 "num_base_bdevs_discovered": 2, 00:09:33.120 "num_base_bdevs_operational": 2, 00:09:33.120 "base_bdevs_list": [ 00:09:33.120 { 00:09:33.120 "name": "pt1", 00:09:33.120 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:33.120 "is_configured": true, 00:09:33.120 "data_offset": 2048, 00:09:33.120 "data_size": 63488 00:09:33.120 }, 00:09:33.120 { 00:09:33.120 "name": "pt2", 00:09:33.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.120 "is_configured": true, 00:09:33.120 "data_offset": 2048, 00:09:33.120 "data_size": 63488 00:09:33.120 } 00:09:33.120 ] 00:09:33.120 } 00:09:33.120 } 00:09:33.120 }' 00:09:33.120 11:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:33.120 pt2' 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.120 11:19:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.120 [2024-11-20 11:19:16.197158] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.120 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4913d11b-7aef-46d5-9d8b-7d2281e903ec 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4913d11b-7aef-46d5-9d8b-7d2281e903ec ']' 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.379 [2024-11-20 11:19:16.240734] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:33.379 [2024-11-20 11:19:16.240765] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:33.379 [2024-11-20 11:19:16.240862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:33.379 [2024-11-20 11:19:16.240927] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:33.379 [2024-11-20 11:19:16.240944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.379 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.379 [2024-11-20 11:19:16.372542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:33.379 [2024-11-20 11:19:16.374599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:33.379 [2024-11-20 11:19:16.374726] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:09:33.379 [2024-11-20 11:19:16.374845] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:33.379 [2024-11-20 11:19:16.374916] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:33.379 [2024-11-20 11:19:16.374960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:33.379 request: 00:09:33.379 { 00:09:33.379 "name": "raid_bdev1", 00:09:33.379 "raid_level": "raid1", 00:09:33.379 "base_bdevs": [ 00:09:33.379 "malloc1", 00:09:33.379 "malloc2" 00:09:33.379 ], 00:09:33.379 "superblock": false, 00:09:33.379 "method": "bdev_raid_create", 00:09:33.379 "req_id": 1 00:09:33.379 } 00:09:33.379 Got JSON-RPC error response 00:09:33.379 response: 00:09:33.379 { 00:09:33.379 "code": -17, 00:09:33.379 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:33.379 } 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.380 [2024-11-20 11:19:16.432412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:33.380 [2024-11-20 11:19:16.432550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.380 [2024-11-20 11:19:16.432604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:33.380 [2024-11-20 11:19:16.432643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.380 [2024-11-20 11:19:16.435008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.380 [2024-11-20 11:19:16.435089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:33.380 [2024-11-20 11:19:16.435205] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:33.380 [2024-11-20 11:19:16.435321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:33.380 pt1 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.380 11:19:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.380 "name": "raid_bdev1", 00:09:33.380 "uuid": "4913d11b-7aef-46d5-9d8b-7d2281e903ec", 00:09:33.380 "strip_size_kb": 0, 00:09:33.380 "state": "configuring", 00:09:33.380 "raid_level": "raid1", 00:09:33.380 "superblock": true, 00:09:33.380 "num_base_bdevs": 2, 00:09:33.380 "num_base_bdevs_discovered": 1, 00:09:33.380 "num_base_bdevs_operational": 2, 00:09:33.380 "base_bdevs_list": [ 00:09:33.380 { 00:09:33.380 "name": "pt1", 00:09:33.380 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:33.380 
"is_configured": true, 00:09:33.380 "data_offset": 2048, 00:09:33.380 "data_size": 63488 00:09:33.380 }, 00:09:33.380 { 00:09:33.380 "name": null, 00:09:33.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.380 "is_configured": false, 00:09:33.380 "data_offset": 2048, 00:09:33.380 "data_size": 63488 00:09:33.380 } 00:09:33.380 ] 00:09:33.380 }' 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.380 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.947 [2024-11-20 11:19:16.859720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:33.947 [2024-11-20 11:19:16.859884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.947 [2024-11-20 11:19:16.859915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:33.947 [2024-11-20 11:19:16.859928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.947 [2024-11-20 11:19:16.860466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.947 [2024-11-20 11:19:16.860491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:33.947 [2024-11-20 11:19:16.860585] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:33.947 [2024-11-20 11:19:16.860613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:33.947 [2024-11-20 11:19:16.860769] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:33.947 [2024-11-20 11:19:16.860783] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:33.947 [2024-11-20 11:19:16.861057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:33.947 [2024-11-20 11:19:16.861253] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:33.947 [2024-11-20 11:19:16.861265] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:33.947 [2024-11-20 11:19:16.861425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.947 pt2 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.947 
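Earlier in this log, the `NOT rpc_cmd bdev_raid_create ...` step expects the RPC to fail: the malloc bdevs still carry a superblock from a different raid bdev, so re-creating `raid_bdev1` over them is rejected with the JSON-RPC error shown in the trace. A small Python sketch of the check that error body implies (the JSON is copied from the log; the surrounding `NOT`/`es` machinery is bash, not Python):

```python
import json

# JSON-RPC error body returned by bdev_raid_create in the log above.
# The bash NOT wrapper sets es=1 when the command fails as expected.
error_response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

# -17 corresponds to -EEXIST: a superblock of a different raid bdev
# was found on malloc1/malloc2, so the create RPC must be refused.
assert error_response["code"] == -17
assert "File exists" in error_response["message"]
```

Had the RPC succeeded here, the `[[ 1 == 0 ]]` guard in the trace would have aborted the test, since a successful create over claimed superblock bdevs would indicate the superblock check was skipped.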
11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.947 "name": "raid_bdev1", 00:09:33.947 "uuid": "4913d11b-7aef-46d5-9d8b-7d2281e903ec", 00:09:33.947 "strip_size_kb": 0, 00:09:33.947 "state": "online", 00:09:33.947 "raid_level": "raid1", 00:09:33.947 "superblock": true, 00:09:33.947 "num_base_bdevs": 2, 00:09:33.947 "num_base_bdevs_discovered": 2, 00:09:33.947 "num_base_bdevs_operational": 2, 00:09:33.947 "base_bdevs_list": [ 00:09:33.947 { 00:09:33.947 "name": "pt1", 00:09:33.947 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:33.947 "is_configured": true, 00:09:33.947 "data_offset": 2048, 00:09:33.947 "data_size": 63488 00:09:33.947 }, 00:09:33.947 { 00:09:33.947 "name": "pt2", 00:09:33.947 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.947 "is_configured": true, 00:09:33.947 "data_offset": 2048, 00:09:33.947 "data_size": 63488 00:09:33.947 } 00:09:33.947 ] 00:09:33.947 }' 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:09:33.947 11:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.205 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:34.205 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:34.205 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:34.205 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:34.205 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:34.205 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:34.205 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:34.205 11:19:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.205 11:19:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.205 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:34.205 [2024-11-20 11:19:17.283431] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.205 11:19:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:34.464 "name": "raid_bdev1", 00:09:34.464 "aliases": [ 00:09:34.464 "4913d11b-7aef-46d5-9d8b-7d2281e903ec" 00:09:34.464 ], 00:09:34.464 "product_name": "Raid Volume", 00:09:34.464 "block_size": 512, 00:09:34.464 "num_blocks": 63488, 00:09:34.464 "uuid": "4913d11b-7aef-46d5-9d8b-7d2281e903ec", 00:09:34.464 "assigned_rate_limits": { 00:09:34.464 "rw_ios_per_sec": 0, 00:09:34.464 "rw_mbytes_per_sec": 0, 00:09:34.464 "r_mbytes_per_sec": 0, 00:09:34.464 "w_mbytes_per_sec": 0 
00:09:34.464 }, 00:09:34.464 "claimed": false, 00:09:34.464 "zoned": false, 00:09:34.464 "supported_io_types": { 00:09:34.464 "read": true, 00:09:34.464 "write": true, 00:09:34.464 "unmap": false, 00:09:34.464 "flush": false, 00:09:34.464 "reset": true, 00:09:34.464 "nvme_admin": false, 00:09:34.464 "nvme_io": false, 00:09:34.464 "nvme_io_md": false, 00:09:34.464 "write_zeroes": true, 00:09:34.464 "zcopy": false, 00:09:34.464 "get_zone_info": false, 00:09:34.464 "zone_management": false, 00:09:34.464 "zone_append": false, 00:09:34.464 "compare": false, 00:09:34.464 "compare_and_write": false, 00:09:34.464 "abort": false, 00:09:34.464 "seek_hole": false, 00:09:34.464 "seek_data": false, 00:09:34.464 "copy": false, 00:09:34.464 "nvme_iov_md": false 00:09:34.464 }, 00:09:34.464 "memory_domains": [ 00:09:34.464 { 00:09:34.464 "dma_device_id": "system", 00:09:34.464 "dma_device_type": 1 00:09:34.464 }, 00:09:34.464 { 00:09:34.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.464 "dma_device_type": 2 00:09:34.464 }, 00:09:34.464 { 00:09:34.464 "dma_device_id": "system", 00:09:34.464 "dma_device_type": 1 00:09:34.464 }, 00:09:34.464 { 00:09:34.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.464 "dma_device_type": 2 00:09:34.464 } 00:09:34.464 ], 00:09:34.464 "driver_specific": { 00:09:34.464 "raid": { 00:09:34.464 "uuid": "4913d11b-7aef-46d5-9d8b-7d2281e903ec", 00:09:34.464 "strip_size_kb": 0, 00:09:34.464 "state": "online", 00:09:34.464 "raid_level": "raid1", 00:09:34.464 "superblock": true, 00:09:34.464 "num_base_bdevs": 2, 00:09:34.464 "num_base_bdevs_discovered": 2, 00:09:34.464 "num_base_bdevs_operational": 2, 00:09:34.464 "base_bdevs_list": [ 00:09:34.464 { 00:09:34.464 "name": "pt1", 00:09:34.464 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:34.464 "is_configured": true, 00:09:34.464 "data_offset": 2048, 00:09:34.464 "data_size": 63488 00:09:34.464 }, 00:09:34.464 { 00:09:34.464 "name": "pt2", 00:09:34.464 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:09:34.464 "is_configured": true, 00:09:34.464 "data_offset": 2048, 00:09:34.464 "data_size": 63488 00:09:34.464 } 00:09:34.464 ] 00:09:34.464 } 00:09:34.464 } 00:09:34.464 }' 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:34.464 pt2' 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.464 [2024-11-20 11:19:17.511052] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4913d11b-7aef-46d5-9d8b-7d2281e903ec '!=' 4913d11b-7aef-46d5-9d8b-7d2281e903ec ']' 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:34.464 [2024-11-20 11:19:17.558775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.464 11:19:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.743 11:19:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.743 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:09:34.743 "name": "raid_bdev1", 00:09:34.743 "uuid": "4913d11b-7aef-46d5-9d8b-7d2281e903ec", 00:09:34.743 "strip_size_kb": 0, 00:09:34.743 "state": "online", 00:09:34.743 "raid_level": "raid1", 00:09:34.743 "superblock": true, 00:09:34.743 "num_base_bdevs": 2, 00:09:34.743 "num_base_bdevs_discovered": 1, 00:09:34.743 "num_base_bdevs_operational": 1, 00:09:34.743 "base_bdevs_list": [ 00:09:34.743 { 00:09:34.743 "name": null, 00:09:34.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.743 "is_configured": false, 00:09:34.743 "data_offset": 0, 00:09:34.743 "data_size": 63488 00:09:34.743 }, 00:09:34.743 { 00:09:34.743 "name": "pt2", 00:09:34.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.743 "is_configured": true, 00:09:34.743 "data_offset": 2048, 00:09:34.743 "data_size": 63488 00:09:34.743 } 00:09:34.743 ] 00:09:34.743 }' 00:09:34.743 11:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.743 11:19:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.000 [2024-11-20 11:19:18.021930] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:35.000 [2024-11-20 11:19:18.021965] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.000 [2024-11-20 11:19:18.022057] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.000 [2024-11-20 11:19:18.022114] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.000 [2024-11-20 11:19:18.022127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.000 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.001 [2024-11-20 11:19:18.093835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:35.001 [2024-11-20 11:19:18.094017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.001 [2024-11-20 11:19:18.094065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:35.001 [2024-11-20 11:19:18.094106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.001 [2024-11-20 11:19:18.096746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.001 [2024-11-20 11:19:18.096868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:35.001 [2024-11-20 11:19:18.097028] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:35.001 [2024-11-20 11:19:18.097126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:35.001 [2024-11-20 11:19:18.097320] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:35.001 [2024-11-20 11:19:18.097376] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:35.001 [2024-11-20 11:19:18.097706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:35.001 [2024-11-20 11:19:18.097931] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:35.001 [2024-11-20 11:19:18.097948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:09:35.001 [2024-11-20 11:19:18.098173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.001 pt2 00:09:35.001 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.001 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:35.001 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:35.001 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.001 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.001 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.001 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:35.001 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.001 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.001 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.001 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.001 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.001 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.001 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.001 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.259 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.259 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:09:35.259 "name": "raid_bdev1", 00:09:35.259 "uuid": "4913d11b-7aef-46d5-9d8b-7d2281e903ec", 00:09:35.259 "strip_size_kb": 0, 00:09:35.259 "state": "online", 00:09:35.259 "raid_level": "raid1", 00:09:35.259 "superblock": true, 00:09:35.259 "num_base_bdevs": 2, 00:09:35.259 "num_base_bdevs_discovered": 1, 00:09:35.259 "num_base_bdevs_operational": 1, 00:09:35.259 "base_bdevs_list": [ 00:09:35.259 { 00:09:35.259 "name": null, 00:09:35.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.259 "is_configured": false, 00:09:35.259 "data_offset": 2048, 00:09:35.259 "data_size": 63488 00:09:35.259 }, 00:09:35.259 { 00:09:35.259 "name": "pt2", 00:09:35.259 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.259 "is_configured": true, 00:09:35.259 "data_offset": 2048, 00:09:35.259 "data_size": 63488 00:09:35.259 } 00:09:35.259 ] 00:09:35.259 }' 00:09:35.259 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.259 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.516 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:35.516 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.516 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.516 [2024-11-20 11:19:18.561323] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:35.516 [2024-11-20 11:19:18.561414] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.516 [2024-11-20 11:19:18.561549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.516 [2024-11-20 11:19:18.561640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.516 [2024-11-20 11:19:18.561693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:35.516 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.516 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.516 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:35.516 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.516 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.516 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.516 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:35.516 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:35.516 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:09:35.516 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:35.516 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.516 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.516 [2024-11-20 11:19:18.625278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:35.516 [2024-11-20 11:19:18.625362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.516 [2024-11-20 11:19:18.625386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:35.516 [2024-11-20 11:19:18.625396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.516 [2024-11-20 11:19:18.627970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.516 [2024-11-20 11:19:18.628023] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:35.517 [2024-11-20 11:19:18.628134] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:35.517 [2024-11-20 11:19:18.628193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:35.517 [2024-11-20 11:19:18.628361] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:35.517 [2024-11-20 11:19:18.628373] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:35.517 [2024-11-20 11:19:18.628406] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:35.517 [2024-11-20 11:19:18.628487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:35.517 [2024-11-20 11:19:18.628582] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:35.517 [2024-11-20 11:19:18.628598] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:35.517 [2024-11-20 11:19:18.628890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:35.517 [2024-11-20 11:19:18.629046] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:35.517 [2024-11-20 11:19:18.629060] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:35.517 [2024-11-20 11:19:18.629282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.517 pt1 00:09:35.774 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.774 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:09:35.774 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:09:35.774 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:35.774 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.774 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.775 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.775 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:35.775 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.775 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.775 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.775 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.775 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.775 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.775 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.775 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.775 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.775 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.775 "name": "raid_bdev1", 00:09:35.775 "uuid": "4913d11b-7aef-46d5-9d8b-7d2281e903ec", 00:09:35.775 "strip_size_kb": 0, 00:09:35.775 "state": "online", 00:09:35.775 "raid_level": "raid1", 00:09:35.775 "superblock": true, 00:09:35.775 "num_base_bdevs": 2, 00:09:35.775 "num_base_bdevs_discovered": 1, 00:09:35.775 "num_base_bdevs_operational": 
1, 00:09:35.775 "base_bdevs_list": [ 00:09:35.775 { 00:09:35.775 "name": null, 00:09:35.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.775 "is_configured": false, 00:09:35.775 "data_offset": 2048, 00:09:35.775 "data_size": 63488 00:09:35.775 }, 00:09:35.775 { 00:09:35.775 "name": "pt2", 00:09:35.775 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.775 "is_configured": true, 00:09:35.775 "data_offset": 2048, 00:09:35.775 "data_size": 63488 00:09:35.775 } 00:09:35.775 ] 00:09:35.775 }' 00:09:35.775 11:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.775 11:19:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.034 11:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:36.034 11:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:36.034 11:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.034 11:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.035 11:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.035 11:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:36.035 11:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:36.035 11:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.035 11:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.035 11:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:36.035 [2024-11-20 11:19:19.112698] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.035 11:19:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.292 11:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4913d11b-7aef-46d5-9d8b-7d2281e903ec '!=' 4913d11b-7aef-46d5-9d8b-7d2281e903ec ']' 00:09:36.292 11:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63317 00:09:36.292 11:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63317 ']' 00:09:36.292 11:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63317 00:09:36.292 11:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:36.292 11:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.293 11:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63317 00:09:36.293 killing process with pid 63317 00:09:36.293 11:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.293 11:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.293 11:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63317' 00:09:36.293 11:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63317 00:09:36.293 [2024-11-20 11:19:19.199548] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:36.293 11:19:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63317 00:09:36.293 [2024-11-20 11:19:19.199658] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.293 [2024-11-20 11:19:19.199712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.293 [2024-11-20 11:19:19.199726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:09:36.561 [2024-11-20 11:19:19.446324] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.975 11:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:37.975 00:09:37.975 real 0m6.285s 00:09:37.975 user 0m9.472s 00:09:37.975 sys 0m1.026s 00:09:37.975 11:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.975 ************************************ 00:09:37.975 END TEST raid_superblock_test 00:09:37.975 ************************************ 00:09:37.975 11:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.975 11:19:20 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:09:37.975 11:19:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:37.975 11:19:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.975 11:19:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:37.975 ************************************ 00:09:37.975 START TEST raid_read_error_test 00:09:37.975 ************************************ 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.N7fBzKGO17 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63647 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63647 00:09:37.975 
11:19:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63647 ']' 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.975 11:19:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.975 [2024-11-20 11:19:20.846218] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:09:37.975 [2024-11-20 11:19:20.846377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63647 ] 00:09:37.975 [2024-11-20 11:19:21.023174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.235 [2024-11-20 11:19:21.148417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.493 [2024-11-20 11:19:21.376047] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.493 [2024-11-20 11:19:21.376119] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.752 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.752 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.753 BaseBdev1_malloc 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.753 true 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.753 [2024-11-20 11:19:21.755603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:38.753 [2024-11-20 11:19:21.755656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.753 [2024-11-20 11:19:21.755676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:38.753 [2024-11-20 11:19:21.755688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.753 [2024-11-20 11:19:21.757882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.753 [2024-11-20 11:19:21.758008] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:09:38.753 BaseBdev1 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.753 BaseBdev2_malloc 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.753 true 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.753 [2024-11-20 11:19:21.824605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:38.753 [2024-11-20 11:19:21.824670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.753 [2024-11-20 11:19:21.824689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:38.753 [2024-11-20 11:19:21.824702] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.753 [2024-11-20 11:19:21.827036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.753 [2024-11-20 11:19:21.827084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:38.753 BaseBdev2 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.753 [2024-11-20 11:19:21.836725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.753 [2024-11-20 11:19:21.838856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.753 [2024-11-20 11:19:21.839111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:38.753 [2024-11-20 11:19:21.839129] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:38.753 [2024-11-20 11:19:21.839438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:38.753 [2024-11-20 11:19:21.839712] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:38.753 [2024-11-20 11:19:21.839726] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:38.753 [2024-11-20 11:19:21.839919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.753 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.012 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.012 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.012 "name": "raid_bdev1", 00:09:39.012 "uuid": "115a13a5-ea0c-4c93-9f17-ef22643cfb4e", 00:09:39.012 "strip_size_kb": 0, 00:09:39.012 "state": "online", 00:09:39.012 "raid_level": "raid1", 00:09:39.012 "superblock": true, 00:09:39.012 "num_base_bdevs": 2, 00:09:39.012 
"num_base_bdevs_discovered": 2, 00:09:39.012 "num_base_bdevs_operational": 2, 00:09:39.012 "base_bdevs_list": [ 00:09:39.012 { 00:09:39.012 "name": "BaseBdev1", 00:09:39.012 "uuid": "57b6862b-ab2a-5a4e-a3fb-01bc21de5e34", 00:09:39.012 "is_configured": true, 00:09:39.012 "data_offset": 2048, 00:09:39.012 "data_size": 63488 00:09:39.012 }, 00:09:39.012 { 00:09:39.012 "name": "BaseBdev2", 00:09:39.012 "uuid": "c86ffb7c-b123-5d5c-af52-459306da50bc", 00:09:39.012 "is_configured": true, 00:09:39.012 "data_offset": 2048, 00:09:39.012 "data_size": 63488 00:09:39.012 } 00:09:39.012 ] 00:09:39.012 }' 00:09:39.012 11:19:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.012 11:19:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.283 11:19:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:39.283 11:19:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:39.283 [2024-11-20 11:19:22.373163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:40.221 11:19:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.221 11:19:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.481 11:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.481 "name": "raid_bdev1", 00:09:40.481 "uuid": "115a13a5-ea0c-4c93-9f17-ef22643cfb4e", 00:09:40.481 "strip_size_kb": 0, 00:09:40.481 "state": "online", 
00:09:40.481 "raid_level": "raid1", 00:09:40.481 "superblock": true, 00:09:40.481 "num_base_bdevs": 2, 00:09:40.481 "num_base_bdevs_discovered": 2, 00:09:40.481 "num_base_bdevs_operational": 2, 00:09:40.481 "base_bdevs_list": [ 00:09:40.481 { 00:09:40.481 "name": "BaseBdev1", 00:09:40.481 "uuid": "57b6862b-ab2a-5a4e-a3fb-01bc21de5e34", 00:09:40.481 "is_configured": true, 00:09:40.481 "data_offset": 2048, 00:09:40.481 "data_size": 63488 00:09:40.481 }, 00:09:40.481 { 00:09:40.481 "name": "BaseBdev2", 00:09:40.481 "uuid": "c86ffb7c-b123-5d5c-af52-459306da50bc", 00:09:40.481 "is_configured": true, 00:09:40.481 "data_offset": 2048, 00:09:40.481 "data_size": 63488 00:09:40.481 } 00:09:40.481 ] 00:09:40.481 }' 00:09:40.481 11:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.481 11:19:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.741 11:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:40.741 11:19:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.741 11:19:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.741 [2024-11-20 11:19:23.745995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.741 [2024-11-20 11:19:23.746035] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.741 [2024-11-20 11:19:23.749139] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.741 [2024-11-20 11:19:23.749266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.741 [2024-11-20 11:19:23.749368] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.741 [2024-11-20 11:19:23.749383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:09:40.741 { 00:09:40.741 "results": [ 00:09:40.741 { 00:09:40.741 "job": "raid_bdev1", 00:09:40.741 "core_mask": "0x1", 00:09:40.741 "workload": "randrw", 00:09:40.741 "percentage": 50, 00:09:40.741 "status": "finished", 00:09:40.741 "queue_depth": 1, 00:09:40.741 "io_size": 131072, 00:09:40.741 "runtime": 1.373639, 00:09:40.741 "iops": 16034.052614988363, 00:09:40.741 "mibps": 2004.2565768735453, 00:09:40.741 "io_failed": 0, 00:09:40.741 "io_timeout": 0, 00:09:40.741 "avg_latency_us": 59.49079174617966, 00:09:40.741 "min_latency_us": 23.811353711790392, 00:09:40.741 "max_latency_us": 1752.8733624454148 00:09:40.741 } 00:09:40.741 ], 00:09:40.741 "core_count": 1 00:09:40.741 } 00:09:40.741 11:19:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.741 11:19:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63647 00:09:40.741 11:19:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63647 ']' 00:09:40.741 11:19:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63647 00:09:40.741 11:19:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:40.741 11:19:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.741 11:19:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63647 00:09:40.741 killing process with pid 63647 00:09:40.741 11:19:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.741 11:19:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.741 11:19:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63647' 00:09:40.741 11:19:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63647 00:09:40.741 11:19:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63647 00:09:40.741 [2024-11-20 11:19:23.790205] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:41.001 [2024-11-20 11:19:23.942163] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:42.381 11:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.N7fBzKGO17 00:09:42.381 11:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:42.381 11:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:42.381 11:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:42.381 11:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:42.381 11:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:42.381 11:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:42.381 ************************************ 00:09:42.381 END TEST raid_read_error_test 00:09:42.381 ************************************ 00:09:42.381 11:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:42.381 00:09:42.381 real 0m4.570s 00:09:42.381 user 0m5.471s 00:09:42.381 sys 0m0.545s 00:09:42.381 11:19:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.381 11:19:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.381 11:19:25 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:42.381 11:19:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:42.381 11:19:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.381 11:19:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.381 ************************************ 00:09:42.381 START TEST 
raid_write_error_test 00:09:42.381 ************************************ 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:42.381 11:19:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4x9NbvAzdB 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63791 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:42.381 11:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63791 00:09:42.382 11:19:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63791 ']' 00:09:42.382 11:19:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.382 11:19:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.382 11:19:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.382 11:19:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.382 11:19:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.642 [2024-11-20 11:19:25.496664] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:09:42.642 [2024-11-20 11:19:25.496813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63791 ] 00:09:42.642 [2024-11-20 11:19:25.674540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.901 [2024-11-20 11:19:25.803440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.159 [2024-11-20 11:19:26.036546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.159 [2024-11-20 11:19:26.036621] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.419 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.419 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:43.419 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.419 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:43.419 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.419 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.419 BaseBdev1_malloc 00:09:43.419 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.419 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:43.419 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.419 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.419 true 00:09:43.419 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:43.419 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:43.419 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.419 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.419 [2024-11-20 11:19:26.487004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:43.419 [2024-11-20 11:19:26.487088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.419 [2024-11-20 11:19:26.487117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:43.419 [2024-11-20 11:19:26.487131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.419 [2024-11-20 11:19:26.489731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.419 [2024-11-20 11:19:26.489786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:43.419 BaseBdev1 00:09:43.419 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.419 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.419 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:43.419 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.419 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.679 BaseBdev2_malloc 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:43.679 11:19:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.679 true 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.679 [2024-11-20 11:19:26.560375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:43.679 [2024-11-20 11:19:26.560571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.679 [2024-11-20 11:19:26.560601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:43.679 [2024-11-20 11:19:26.560615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.679 [2024-11-20 11:19:26.563229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.679 [2024-11-20 11:19:26.563291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:43.679 BaseBdev2 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.679 [2024-11-20 11:19:26.572470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:43.679 [2024-11-20 11:19:26.574720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.679 [2024-11-20 11:19:26.575092] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:43.679 [2024-11-20 11:19:26.575119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:43.679 [2024-11-20 11:19:26.575518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:43.679 [2024-11-20 11:19:26.575747] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:43.679 [2024-11-20 11:19:26.575767] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:43.679 [2024-11-20 11:19:26.575980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.679 "name": "raid_bdev1", 00:09:43.679 "uuid": "3e699cb2-9d52-4f34-ade7-263c41caed69", 00:09:43.679 "strip_size_kb": 0, 00:09:43.679 "state": "online", 00:09:43.679 "raid_level": "raid1", 00:09:43.679 "superblock": true, 00:09:43.679 "num_base_bdevs": 2, 00:09:43.679 "num_base_bdevs_discovered": 2, 00:09:43.679 "num_base_bdevs_operational": 2, 00:09:43.679 "base_bdevs_list": [ 00:09:43.679 { 00:09:43.679 "name": "BaseBdev1", 00:09:43.679 "uuid": "464df3c3-5c34-54c3-bdaf-c70be18d0506", 00:09:43.679 "is_configured": true, 00:09:43.679 "data_offset": 2048, 00:09:43.679 "data_size": 63488 00:09:43.679 }, 00:09:43.679 { 00:09:43.679 "name": "BaseBdev2", 00:09:43.679 "uuid": "3cd798bd-4480-5a09-8d4c-6fde42b283da", 00:09:43.679 "is_configured": true, 00:09:43.679 "data_offset": 2048, 00:09:43.679 "data_size": 63488 00:09:43.679 } 00:09:43.679 ] 00:09:43.679 }' 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.679 11:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.938 11:19:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:43.938 11:19:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:44.198 [2024-11-20 11:19:27.125197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.137 [2024-11-20 11:19:28.029665] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:45.137 [2024-11-20 11:19:28.029815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:45.137 [2024-11-20 11:19:28.030040] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.137 "name": "raid_bdev1", 00:09:45.137 "uuid": "3e699cb2-9d52-4f34-ade7-263c41caed69", 00:09:45.137 "strip_size_kb": 0, 00:09:45.137 "state": "online", 00:09:45.137 "raid_level": "raid1", 00:09:45.137 "superblock": true, 00:09:45.137 "num_base_bdevs": 2, 00:09:45.137 "num_base_bdevs_discovered": 1, 00:09:45.137 "num_base_bdevs_operational": 1, 00:09:45.137 "base_bdevs_list": [ 00:09:45.137 { 00:09:45.137 "name": null, 00:09:45.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.137 "is_configured": false, 00:09:45.137 "data_offset": 0, 00:09:45.137 "data_size": 63488 00:09:45.137 }, 00:09:45.137 { 00:09:45.137 "name": 
"BaseBdev2", 00:09:45.137 "uuid": "3cd798bd-4480-5a09-8d4c-6fde42b283da", 00:09:45.137 "is_configured": true, 00:09:45.137 "data_offset": 2048, 00:09:45.137 "data_size": 63488 00:09:45.137 } 00:09:45.137 ] 00:09:45.137 }' 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.137 11:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.397 11:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:45.397 11:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.397 11:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.397 [2024-11-20 11:19:28.479967] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.397 [2024-11-20 11:19:28.480004] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.397 [2024-11-20 11:19:28.483212] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.397 [2024-11-20 11:19:28.483298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.397 [2024-11-20 11:19:28.483394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.397 [2024-11-20 11:19:28.483448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:45.397 { 00:09:45.397 "results": [ 00:09:45.397 { 00:09:45.397 "job": "raid_bdev1", 00:09:45.397 "core_mask": "0x1", 00:09:45.397 "workload": "randrw", 00:09:45.397 "percentage": 50, 00:09:45.397 "status": "finished", 00:09:45.397 "queue_depth": 1, 00:09:45.397 "io_size": 131072, 00:09:45.397 "runtime": 1.355376, 00:09:45.397 "iops": 19041.948507277684, 00:09:45.397 "mibps": 2380.2435634097105, 00:09:45.397 "io_failed": 0, 00:09:45.397 "io_timeout": 0, 
00:09:45.397 "avg_latency_us": 49.702639392744246, 00:09:45.397 "min_latency_us": 23.58777292576419, 00:09:45.397 "max_latency_us": 1438.071615720524 00:09:45.397 } 00:09:45.397 ], 00:09:45.397 "core_count": 1 00:09:45.397 } 00:09:45.397 11:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.397 11:19:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63791 00:09:45.397 11:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63791 ']' 00:09:45.397 11:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63791 00:09:45.397 11:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:45.397 11:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.397 11:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63791 00:09:45.658 11:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.658 11:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.658 killing process with pid 63791 00:09:45.658 11:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63791' 00:09:45.658 11:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63791 00:09:45.658 [2024-11-20 11:19:28.533036] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:45.658 11:19:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63791 00:09:45.658 [2024-11-20 11:19:28.689011] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.038 11:19:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:47.038 11:19:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job 
/raidtest/tmp.4x9NbvAzdB 00:09:47.038 11:19:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:47.038 11:19:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:47.038 11:19:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:47.038 11:19:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:47.038 11:19:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:47.038 11:19:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:47.038 00:09:47.038 real 0m4.603s 00:09:47.038 user 0m5.557s 00:09:47.038 sys 0m0.551s 00:09:47.038 ************************************ 00:09:47.038 END TEST raid_write_error_test 00:09:47.038 ************************************ 00:09:47.038 11:19:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.038 11:19:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.038 11:19:30 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:47.038 11:19:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:47.038 11:19:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:09:47.038 11:19:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:47.038 11:19:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.038 11:19:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:47.038 ************************************ 00:09:47.038 START TEST raid_state_function_test 00:09:47.038 ************************************ 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
local raid_level=raid0 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:47.038 11:19:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:47.038 Process raid pid: 63936 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63936 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63936' 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63936 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63936 ']' 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.038 11:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.296 [2024-11-20 11:19:30.152420] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:09:47.296 [2024-11-20 11:19:30.152559] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.296 [2024-11-20 11:19:30.333302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.555 [2024-11-20 11:19:30.470016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.813 [2024-11-20 11:19:30.703323] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.813 [2024-11-20 11:19:30.703373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.071 [2024-11-20 11:19:31.048066] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:48.071 [2024-11-20 11:19:31.048136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:48.071 [2024-11-20 11:19:31.048149] 
bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:48.071 [2024-11-20 11:19:31.048161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:48.071 [2024-11-20 11:19:31.048169] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:48.071 [2024-11-20 11:19:31.048180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
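The `verify_raid_bdev_state` helper exercised throughout this trace pulls one entry out of the `bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "Existed_Raid")'`. A minimal Python sketch of that selection step, for readers following the trace; the two-entry input list here is hypothetical, not taken from this run:

```python
import json

# Hypothetical `bdev_raid_get_bdevs all` output: the helper has to pick
# a single raid bdev out of a list that may contain several entries.
get_bdevs_output = json.dumps([
    {"name": "raid_bdev1", "state": "online"},
    {"name": "Existed_Raid", "state": "configuring"},
])

# Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
selected = [b for b in json.loads(get_bdevs_output)
            if b["name"] == "Existed_Raid"]

assert len(selected) == 1
print(selected[0]["state"])  # → configuring
```

The shell helper then stores the selected object in `raid_bdev_info` and compares its fields against the expected state, level, strip size, and operational count.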
00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.071 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.071 "name": "Existed_Raid", 00:09:48.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.071 "strip_size_kb": 64, 00:09:48.071 "state": "configuring", 00:09:48.071 "raid_level": "raid0", 00:09:48.071 "superblock": false, 00:09:48.071 "num_base_bdevs": 3, 00:09:48.071 "num_base_bdevs_discovered": 0, 00:09:48.071 "num_base_bdevs_operational": 3, 00:09:48.071 "base_bdevs_list": [ 00:09:48.071 { 00:09:48.071 "name": "BaseBdev1", 00:09:48.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.071 "is_configured": false, 00:09:48.071 "data_offset": 0, 00:09:48.071 "data_size": 0 00:09:48.071 }, 00:09:48.071 { 00:09:48.071 "name": "BaseBdev2", 00:09:48.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.071 "is_configured": false, 00:09:48.071 "data_offset": 0, 00:09:48.071 "data_size": 0 00:09:48.071 }, 00:09:48.071 { 00:09:48.071 "name": "BaseBdev3", 00:09:48.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.071 "is_configured": false, 00:09:48.071 "data_offset": 0, 00:09:48.071 "data_size": 0 00:09:48.072 } 00:09:48.072 ] 00:09:48.072 }' 00:09:48.072 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.072 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.638 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:48.638 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.638 11:19:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.638 [2024-11-20 11:19:31.467544] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:48.638 [2024-11-20 11:19:31.467672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:48.638 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.638 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:48.638 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.638 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.638 [2024-11-20 11:19:31.479566] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:48.638 [2024-11-20 11:19:31.479709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:48.638 [2024-11-20 11:19:31.479743] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:48.638 [2024-11-20 11:19:31.479770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:48.638 [2024-11-20 11:19:31.479792] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:48.638 [2024-11-20 11:19:31.479818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:48.638 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.638 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:48.638 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:48.638 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.638 [2024-11-20 11:19:31.533346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.638 BaseBdev1 00:09:48.638 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.638 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.639 [ 00:09:48.639 { 00:09:48.639 "name": "BaseBdev1", 00:09:48.639 "aliases": [ 00:09:48.639 "92c21401-1f3b-479d-8371-05b9c01f1fea" 00:09:48.639 ], 00:09:48.639 
"product_name": "Malloc disk", 00:09:48.639 "block_size": 512, 00:09:48.639 "num_blocks": 65536, 00:09:48.639 "uuid": "92c21401-1f3b-479d-8371-05b9c01f1fea", 00:09:48.639 "assigned_rate_limits": { 00:09:48.639 "rw_ios_per_sec": 0, 00:09:48.639 "rw_mbytes_per_sec": 0, 00:09:48.639 "r_mbytes_per_sec": 0, 00:09:48.639 "w_mbytes_per_sec": 0 00:09:48.639 }, 00:09:48.639 "claimed": true, 00:09:48.639 "claim_type": "exclusive_write", 00:09:48.639 "zoned": false, 00:09:48.639 "supported_io_types": { 00:09:48.639 "read": true, 00:09:48.639 "write": true, 00:09:48.639 "unmap": true, 00:09:48.639 "flush": true, 00:09:48.639 "reset": true, 00:09:48.639 "nvme_admin": false, 00:09:48.639 "nvme_io": false, 00:09:48.639 "nvme_io_md": false, 00:09:48.639 "write_zeroes": true, 00:09:48.639 "zcopy": true, 00:09:48.639 "get_zone_info": false, 00:09:48.639 "zone_management": false, 00:09:48.639 "zone_append": false, 00:09:48.639 "compare": false, 00:09:48.639 "compare_and_write": false, 00:09:48.639 "abort": true, 00:09:48.639 "seek_hole": false, 00:09:48.639 "seek_data": false, 00:09:48.639 "copy": true, 00:09:48.639 "nvme_iov_md": false 00:09:48.639 }, 00:09:48.639 "memory_domains": [ 00:09:48.639 { 00:09:48.639 "dma_device_id": "system", 00:09:48.639 "dma_device_type": 1 00:09:48.639 }, 00:09:48.639 { 00:09:48.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.639 "dma_device_type": 2 00:09:48.639 } 00:09:48.639 ], 00:09:48.639 "driver_specific": {} 00:09:48.639 } 00:09:48.639 ] 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.639 11:19:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.639 "name": "Existed_Raid", 00:09:48.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.639 "strip_size_kb": 64, 00:09:48.639 "state": "configuring", 00:09:48.639 "raid_level": "raid0", 00:09:48.639 "superblock": false, 00:09:48.639 "num_base_bdevs": 3, 00:09:48.639 "num_base_bdevs_discovered": 1, 00:09:48.639 "num_base_bdevs_operational": 3, 00:09:48.639 "base_bdevs_list": [ 00:09:48.639 { 00:09:48.639 "name": "BaseBdev1", 
00:09:48.639 "uuid": "92c21401-1f3b-479d-8371-05b9c01f1fea", 00:09:48.639 "is_configured": true, 00:09:48.639 "data_offset": 0, 00:09:48.639 "data_size": 65536 00:09:48.639 }, 00:09:48.639 { 00:09:48.639 "name": "BaseBdev2", 00:09:48.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.639 "is_configured": false, 00:09:48.639 "data_offset": 0, 00:09:48.639 "data_size": 0 00:09:48.639 }, 00:09:48.639 { 00:09:48.639 "name": "BaseBdev3", 00:09:48.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.639 "is_configured": false, 00:09:48.639 "data_offset": 0, 00:09:48.639 "data_size": 0 00:09:48.639 } 00:09:48.639 ] 00:09:48.639 }' 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.639 11:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.207 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:49.207 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.207 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.207 [2024-11-20 11:19:32.024629] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:49.208 [2024-11-20 11:19:32.024711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.208 [2024-11-20 
11:19:32.036681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:49.208 [2024-11-20 11:19:32.038854] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:49.208 [2024-11-20 11:19:32.038922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:49.208 [2024-11-20 11:19:32.038934] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:49.208 [2024-11-20 11:19:32.038946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.208 "name": "Existed_Raid", 00:09:49.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.208 "strip_size_kb": 64, 00:09:49.208 "state": "configuring", 00:09:49.208 "raid_level": "raid0", 00:09:49.208 "superblock": false, 00:09:49.208 "num_base_bdevs": 3, 00:09:49.208 "num_base_bdevs_discovered": 1, 00:09:49.208 "num_base_bdevs_operational": 3, 00:09:49.208 "base_bdevs_list": [ 00:09:49.208 { 00:09:49.208 "name": "BaseBdev1", 00:09:49.208 "uuid": "92c21401-1f3b-479d-8371-05b9c01f1fea", 00:09:49.208 "is_configured": true, 00:09:49.208 "data_offset": 0, 00:09:49.208 "data_size": 65536 00:09:49.208 }, 00:09:49.208 { 00:09:49.208 "name": "BaseBdev2", 00:09:49.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.208 "is_configured": false, 00:09:49.208 "data_offset": 0, 00:09:49.208 "data_size": 0 00:09:49.208 }, 00:09:49.208 { 00:09:49.208 "name": "BaseBdev3", 00:09:49.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.208 "is_configured": false, 00:09:49.208 "data_offset": 0, 00:09:49.208 "data_size": 0 00:09:49.208 } 00:09:49.208 ] 00:09:49.208 }' 00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
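After BaseBdev1 is claimed, `verify_raid_bdev_state Existed_Raid configuring raid0 64 3` checks the `raid_bdev_info` JSON shown above field by field. A sketch of that comparison in Python, with the field values copied verbatim from this run's output (only the fields the helper inspects are reproduced):

```python
import json

# Fields from the Existed_Raid entry logged above: BaseBdev1 has been
# claimed, BaseBdev2/BaseBdev3 do not exist yet.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "superblock": false,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 3
}
""")

# verify_raid_bdev_state Existed_Raid configuring raid0 64 3
expected = {"state": "configuring", "raid_level": "raid0",
            "strip_size_kb": 64, "num_base_bdevs_operational": 3}
for key, value in expected.items():
    assert raid_bdev_info[key] == value

print(raid_bdev_info["num_base_bdevs_discovered"])  # → 1
```

The array stays in the `configuring` state until all three base bdevs are discovered and claimed; the trace that follows repeats this check after each `bdev_malloc_create` adds another base bdev.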
00:09:49.208 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.489 [2024-11-20 11:19:32.513263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.489 BaseBdev2 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:49.489 11:19:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.489 [ 00:09:49.489 { 00:09:49.489 "name": "BaseBdev2", 00:09:49.489 "aliases": [ 00:09:49.489 "beb16e5f-3490-4273-a051-fa2d5c9f89bf" 00:09:49.489 ], 00:09:49.489 "product_name": "Malloc disk", 00:09:49.489 "block_size": 512, 00:09:49.489 "num_blocks": 65536, 00:09:49.489 "uuid": "beb16e5f-3490-4273-a051-fa2d5c9f89bf", 00:09:49.489 "assigned_rate_limits": { 00:09:49.489 "rw_ios_per_sec": 0, 00:09:49.489 "rw_mbytes_per_sec": 0, 00:09:49.489 "r_mbytes_per_sec": 0, 00:09:49.489 "w_mbytes_per_sec": 0 00:09:49.489 }, 00:09:49.489 "claimed": true, 00:09:49.489 "claim_type": "exclusive_write", 00:09:49.489 "zoned": false, 00:09:49.489 "supported_io_types": { 00:09:49.489 "read": true, 00:09:49.489 "write": true, 00:09:49.489 "unmap": true, 00:09:49.489 "flush": true, 00:09:49.489 "reset": true, 00:09:49.489 "nvme_admin": false, 00:09:49.489 "nvme_io": false, 00:09:49.489 "nvme_io_md": false, 00:09:49.489 "write_zeroes": true, 00:09:49.489 "zcopy": true, 00:09:49.489 "get_zone_info": false, 00:09:49.489 "zone_management": false, 00:09:49.489 "zone_append": false, 00:09:49.489 "compare": false, 00:09:49.489 "compare_and_write": false, 00:09:49.489 "abort": true, 00:09:49.489 "seek_hole": false, 00:09:49.489 "seek_data": false, 00:09:49.489 "copy": true, 00:09:49.489 "nvme_iov_md": false 00:09:49.489 }, 00:09:49.489 "memory_domains": [ 00:09:49.489 { 00:09:49.489 "dma_device_id": "system", 00:09:49.489 "dma_device_type": 1 00:09:49.489 }, 00:09:49.489 { 00:09:49.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.489 "dma_device_type": 2 00:09:49.489 } 00:09:49.489 ], 00:09:49.489 "driver_specific": {} 00:09:49.489 } 00:09:49.489 ] 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.489 11:19:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.489 "name": "Existed_Raid", 00:09:49.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.489 "strip_size_kb": 64, 00:09:49.489 "state": "configuring", 00:09:49.489 "raid_level": "raid0", 00:09:49.489 "superblock": false, 00:09:49.489 "num_base_bdevs": 3, 00:09:49.489 "num_base_bdevs_discovered": 2, 00:09:49.489 "num_base_bdevs_operational": 3, 00:09:49.489 "base_bdevs_list": [ 00:09:49.489 { 00:09:49.489 "name": "BaseBdev1", 00:09:49.489 "uuid": "92c21401-1f3b-479d-8371-05b9c01f1fea", 00:09:49.489 "is_configured": true, 00:09:49.489 "data_offset": 0, 00:09:49.489 "data_size": 65536 00:09:49.489 }, 00:09:49.489 { 00:09:49.489 "name": "BaseBdev2", 00:09:49.489 "uuid": "beb16e5f-3490-4273-a051-fa2d5c9f89bf", 00:09:49.489 "is_configured": true, 00:09:49.489 "data_offset": 0, 00:09:49.489 "data_size": 65536 00:09:49.489 }, 00:09:49.489 { 00:09:49.489 "name": "BaseBdev3", 00:09:49.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.489 "is_configured": false, 00:09:49.489 "data_offset": 0, 00:09:49.489 "data_size": 0 00:09:49.489 } 00:09:49.489 ] 00:09:49.489 }' 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.489 11:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.058 [2024-11-20 11:19:33.120062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:50.058 [2024-11-20 11:19:33.120119] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:50.058 [2024-11-20 11:19:33.120134] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:50.058 [2024-11-20 11:19:33.120432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:50.058 [2024-11-20 11:19:33.120665] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:50.058 [2024-11-20 11:19:33.120677] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:50.058 [2024-11-20 11:19:33.121015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.058 BaseBdev3 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.058 
11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.058 [ 00:09:50.058 { 00:09:50.058 "name": "BaseBdev3", 00:09:50.058 "aliases": [ 00:09:50.058 "0886fb7f-e7bf-4226-b923-cb80bab08f1a" 00:09:50.058 ], 00:09:50.058 "product_name": "Malloc disk", 00:09:50.058 "block_size": 512, 00:09:50.058 "num_blocks": 65536, 00:09:50.058 "uuid": "0886fb7f-e7bf-4226-b923-cb80bab08f1a", 00:09:50.058 "assigned_rate_limits": { 00:09:50.058 "rw_ios_per_sec": 0, 00:09:50.058 "rw_mbytes_per_sec": 0, 00:09:50.058 "r_mbytes_per_sec": 0, 00:09:50.058 "w_mbytes_per_sec": 0 00:09:50.058 }, 00:09:50.058 "claimed": true, 00:09:50.058 "claim_type": "exclusive_write", 00:09:50.058 "zoned": false, 00:09:50.058 "supported_io_types": { 00:09:50.058 "read": true, 00:09:50.058 "write": true, 00:09:50.058 "unmap": true, 00:09:50.058 "flush": true, 00:09:50.058 "reset": true, 00:09:50.058 "nvme_admin": false, 00:09:50.058 "nvme_io": false, 00:09:50.058 "nvme_io_md": false, 00:09:50.058 "write_zeroes": true, 00:09:50.058 "zcopy": true, 00:09:50.058 "get_zone_info": false, 00:09:50.058 "zone_management": false, 00:09:50.058 "zone_append": false, 00:09:50.058 "compare": false, 00:09:50.058 "compare_and_write": false, 00:09:50.058 "abort": true, 00:09:50.058 "seek_hole": false, 00:09:50.058 "seek_data": false, 00:09:50.058 "copy": true, 00:09:50.058 "nvme_iov_md": false 00:09:50.058 }, 00:09:50.058 "memory_domains": [ 00:09:50.058 { 00:09:50.058 "dma_device_id": "system", 00:09:50.058 "dma_device_type": 1 00:09:50.058 }, 00:09:50.058 { 00:09:50.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.058 "dma_device_type": 2 00:09:50.058 } 00:09:50.058 ], 00:09:50.058 "driver_specific": {} 00:09:50.058 } 00:09:50.058 ] 
00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.058 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.059 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.059 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.059 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.059 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.059 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.059 11:19:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:50.319 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.319 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.319 "name": "Existed_Raid", 00:09:50.319 "uuid": "bf513e94-89ac-475a-843f-2676cf21ac78", 00:09:50.319 "strip_size_kb": 64, 00:09:50.319 "state": "online", 00:09:50.319 "raid_level": "raid0", 00:09:50.319 "superblock": false, 00:09:50.319 "num_base_bdevs": 3, 00:09:50.319 "num_base_bdevs_discovered": 3, 00:09:50.319 "num_base_bdevs_operational": 3, 00:09:50.319 "base_bdevs_list": [ 00:09:50.319 { 00:09:50.319 "name": "BaseBdev1", 00:09:50.319 "uuid": "92c21401-1f3b-479d-8371-05b9c01f1fea", 00:09:50.319 "is_configured": true, 00:09:50.319 "data_offset": 0, 00:09:50.319 "data_size": 65536 00:09:50.319 }, 00:09:50.319 { 00:09:50.319 "name": "BaseBdev2", 00:09:50.319 "uuid": "beb16e5f-3490-4273-a051-fa2d5c9f89bf", 00:09:50.319 "is_configured": true, 00:09:50.319 "data_offset": 0, 00:09:50.319 "data_size": 65536 00:09:50.319 }, 00:09:50.319 { 00:09:50.319 "name": "BaseBdev3", 00:09:50.319 "uuid": "0886fb7f-e7bf-4226-b923-cb80bab08f1a", 00:09:50.319 "is_configured": true, 00:09:50.319 "data_offset": 0, 00:09:50.319 "data_size": 65536 00:09:50.319 } 00:09:50.319 ] 00:09:50.319 }' 00:09:50.319 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.319 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.579 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:50.579 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:50.579 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:50.579 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:50.579 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:50.579 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:50.579 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:50.579 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:50.579 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.579 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.579 [2024-11-20 11:19:33.651799] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.579 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.579 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:50.579 "name": "Existed_Raid", 00:09:50.579 "aliases": [ 00:09:50.579 "bf513e94-89ac-475a-843f-2676cf21ac78" 00:09:50.579 ], 00:09:50.579 "product_name": "Raid Volume", 00:09:50.579 "block_size": 512, 00:09:50.579 "num_blocks": 196608, 00:09:50.579 "uuid": "bf513e94-89ac-475a-843f-2676cf21ac78", 00:09:50.579 "assigned_rate_limits": { 00:09:50.579 "rw_ios_per_sec": 0, 00:09:50.579 "rw_mbytes_per_sec": 0, 00:09:50.579 "r_mbytes_per_sec": 0, 00:09:50.579 "w_mbytes_per_sec": 0 00:09:50.579 }, 00:09:50.579 "claimed": false, 00:09:50.579 "zoned": false, 00:09:50.579 "supported_io_types": { 00:09:50.579 "read": true, 00:09:50.579 "write": true, 00:09:50.579 "unmap": true, 00:09:50.579 "flush": true, 00:09:50.579 "reset": true, 00:09:50.579 "nvme_admin": false, 00:09:50.579 "nvme_io": false, 00:09:50.579 "nvme_io_md": false, 00:09:50.579 "write_zeroes": true, 00:09:50.579 "zcopy": false, 00:09:50.579 "get_zone_info": false, 00:09:50.579 "zone_management": false, 00:09:50.579 
"zone_append": false, 00:09:50.579 "compare": false, 00:09:50.579 "compare_and_write": false, 00:09:50.579 "abort": false, 00:09:50.579 "seek_hole": false, 00:09:50.579 "seek_data": false, 00:09:50.579 "copy": false, 00:09:50.579 "nvme_iov_md": false 00:09:50.579 }, 00:09:50.579 "memory_domains": [ 00:09:50.579 { 00:09:50.579 "dma_device_id": "system", 00:09:50.579 "dma_device_type": 1 00:09:50.579 }, 00:09:50.579 { 00:09:50.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.579 "dma_device_type": 2 00:09:50.579 }, 00:09:50.579 { 00:09:50.579 "dma_device_id": "system", 00:09:50.579 "dma_device_type": 1 00:09:50.579 }, 00:09:50.579 { 00:09:50.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.579 "dma_device_type": 2 00:09:50.579 }, 00:09:50.579 { 00:09:50.579 "dma_device_id": "system", 00:09:50.579 "dma_device_type": 1 00:09:50.579 }, 00:09:50.579 { 00:09:50.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.579 "dma_device_type": 2 00:09:50.579 } 00:09:50.579 ], 00:09:50.579 "driver_specific": { 00:09:50.579 "raid": { 00:09:50.579 "uuid": "bf513e94-89ac-475a-843f-2676cf21ac78", 00:09:50.579 "strip_size_kb": 64, 00:09:50.579 "state": "online", 00:09:50.579 "raid_level": "raid0", 00:09:50.579 "superblock": false, 00:09:50.579 "num_base_bdevs": 3, 00:09:50.579 "num_base_bdevs_discovered": 3, 00:09:50.579 "num_base_bdevs_operational": 3, 00:09:50.579 "base_bdevs_list": [ 00:09:50.579 { 00:09:50.579 "name": "BaseBdev1", 00:09:50.579 "uuid": "92c21401-1f3b-479d-8371-05b9c01f1fea", 00:09:50.579 "is_configured": true, 00:09:50.579 "data_offset": 0, 00:09:50.579 "data_size": 65536 00:09:50.579 }, 00:09:50.579 { 00:09:50.579 "name": "BaseBdev2", 00:09:50.579 "uuid": "beb16e5f-3490-4273-a051-fa2d5c9f89bf", 00:09:50.579 "is_configured": true, 00:09:50.579 "data_offset": 0, 00:09:50.579 "data_size": 65536 00:09:50.579 }, 00:09:50.579 { 00:09:50.579 "name": "BaseBdev3", 00:09:50.579 "uuid": "0886fb7f-e7bf-4226-b923-cb80bab08f1a", 00:09:50.579 "is_configured": true, 
00:09:50.579 "data_offset": 0, 00:09:50.579 "data_size": 65536 00:09:50.579 } 00:09:50.579 ] 00:09:50.579 } 00:09:50.579 } 00:09:50.579 }' 00:09:50.579 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:50.840 BaseBdev2 00:09:50.840 BaseBdev3' 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.840 11:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.840 [2024-11-20 11:19:33.950960] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:51.099 [2024-11-20 11:19:33.951053] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.099 [2024-11-20 11:19:33.951123] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.099 "name": "Existed_Raid", 00:09:51.099 "uuid": "bf513e94-89ac-475a-843f-2676cf21ac78", 00:09:51.099 "strip_size_kb": 64, 00:09:51.099 "state": "offline", 00:09:51.099 "raid_level": "raid0", 00:09:51.099 "superblock": false, 00:09:51.099 "num_base_bdevs": 3, 00:09:51.099 "num_base_bdevs_discovered": 2, 00:09:51.099 "num_base_bdevs_operational": 2, 00:09:51.099 "base_bdevs_list": [ 00:09:51.099 { 00:09:51.099 "name": null, 00:09:51.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.099 "is_configured": false, 00:09:51.099 "data_offset": 0, 00:09:51.099 "data_size": 65536 00:09:51.099 }, 00:09:51.099 { 00:09:51.099 "name": "BaseBdev2", 00:09:51.099 "uuid": "beb16e5f-3490-4273-a051-fa2d5c9f89bf", 00:09:51.099 "is_configured": true, 00:09:51.099 "data_offset": 0, 00:09:51.099 "data_size": 65536 00:09:51.099 }, 00:09:51.099 { 00:09:51.099 "name": "BaseBdev3", 00:09:51.099 "uuid": "0886fb7f-e7bf-4226-b923-cb80bab08f1a", 00:09:51.099 "is_configured": true, 00:09:51.099 "data_offset": 0, 00:09:51.099 "data_size": 65536 00:09:51.099 } 00:09:51.099 ] 00:09:51.099 }' 00:09:51.099 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.099 11:19:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.670 [2024-11-20 11:19:34.580860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.670 11:19:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.670 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.670 [2024-11-20 11:19:34.744327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:51.670 [2024-11-20 11:19:34.744393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.930 BaseBdev2 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.930 [ 00:09:51.930 { 00:09:51.930 "name": "BaseBdev2", 00:09:51.930 "aliases": [ 00:09:51.930 "bc84fa44-514f-4f5a-9ac2-a19877611d7a" 00:09:51.930 ], 00:09:51.930 "product_name": "Malloc disk", 00:09:51.930 "block_size": 512, 00:09:51.930 "num_blocks": 65536, 00:09:51.930 "uuid": "bc84fa44-514f-4f5a-9ac2-a19877611d7a", 00:09:51.930 "assigned_rate_limits": { 00:09:51.930 "rw_ios_per_sec": 0, 00:09:51.930 "rw_mbytes_per_sec": 0, 00:09:51.930 "r_mbytes_per_sec": 0, 00:09:51.930 "w_mbytes_per_sec": 0 00:09:51.930 }, 00:09:51.930 "claimed": false, 00:09:51.930 "zoned": false, 00:09:51.930 "supported_io_types": { 00:09:51.930 "read": true, 00:09:51.930 "write": true, 00:09:51.930 "unmap": true, 00:09:51.930 "flush": true, 00:09:51.930 "reset": true, 00:09:51.930 "nvme_admin": false, 00:09:51.930 "nvme_io": false, 00:09:51.930 "nvme_io_md": false, 00:09:51.930 "write_zeroes": true, 00:09:51.930 "zcopy": true, 00:09:51.930 "get_zone_info": false, 00:09:51.930 "zone_management": false, 00:09:51.930 "zone_append": false, 00:09:51.930 "compare": false, 00:09:51.930 "compare_and_write": false, 00:09:51.930 "abort": true, 00:09:51.930 "seek_hole": false, 00:09:51.930 "seek_data": false, 00:09:51.930 "copy": true, 00:09:51.930 "nvme_iov_md": false 00:09:51.930 }, 00:09:51.930 "memory_domains": [ 00:09:51.930 { 00:09:51.930 "dma_device_id": "system", 00:09:51.930 "dma_device_type": 1 00:09:51.930 }, 
00:09:51.930 { 00:09:51.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.930 "dma_device_type": 2 00:09:51.930 } 00:09:51.930 ], 00:09:51.930 "driver_specific": {} 00:09:51.930 } 00:09:51.930 ] 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.930 11:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.930 BaseBdev3 00:09:51.930 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.930 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:51.930 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:51.930 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.930 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:51.930 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.930 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.930 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.930 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:51.930 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.190 [ 00:09:52.190 { 00:09:52.190 "name": "BaseBdev3", 00:09:52.190 "aliases": [ 00:09:52.190 "8495099c-9e5b-4d54-8bcd-3e8d2d77de73" 00:09:52.190 ], 00:09:52.190 "product_name": "Malloc disk", 00:09:52.190 "block_size": 512, 00:09:52.190 "num_blocks": 65536, 00:09:52.190 "uuid": "8495099c-9e5b-4d54-8bcd-3e8d2d77de73", 00:09:52.190 "assigned_rate_limits": { 00:09:52.190 "rw_ios_per_sec": 0, 00:09:52.190 "rw_mbytes_per_sec": 0, 00:09:52.190 "r_mbytes_per_sec": 0, 00:09:52.190 "w_mbytes_per_sec": 0 00:09:52.190 }, 00:09:52.190 "claimed": false, 00:09:52.190 "zoned": false, 00:09:52.190 "supported_io_types": { 00:09:52.190 "read": true, 00:09:52.190 "write": true, 00:09:52.190 "unmap": true, 00:09:52.190 "flush": true, 00:09:52.190 "reset": true, 00:09:52.190 "nvme_admin": false, 00:09:52.190 "nvme_io": false, 00:09:52.190 "nvme_io_md": false, 00:09:52.190 "write_zeroes": true, 00:09:52.190 "zcopy": true, 00:09:52.190 "get_zone_info": false, 00:09:52.190 "zone_management": false, 00:09:52.190 "zone_append": false, 00:09:52.190 "compare": false, 00:09:52.190 "compare_and_write": false, 00:09:52.190 "abort": true, 00:09:52.190 "seek_hole": false, 00:09:52.190 "seek_data": false, 00:09:52.190 "copy": true, 00:09:52.190 "nvme_iov_md": false 00:09:52.190 }, 00:09:52.190 "memory_domains": [ 00:09:52.190 { 00:09:52.190 "dma_device_id": "system", 00:09:52.190 "dma_device_type": 1 00:09:52.190 }, 00:09:52.190 { 
00:09:52.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.190 "dma_device_type": 2 00:09:52.190 } 00:09:52.190 ], 00:09:52.190 "driver_specific": {} 00:09:52.190 } 00:09:52.190 ] 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.190 [2024-11-20 11:19:35.087011] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:52.190 [2024-11-20 11:19:35.087118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:52.190 [2024-11-20 11:19:35.087172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.190 [2024-11-20 11:19:35.089142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.190 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.190 "name": "Existed_Raid", 00:09:52.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.190 "strip_size_kb": 64, 00:09:52.190 "state": "configuring", 00:09:52.190 "raid_level": "raid0", 00:09:52.190 "superblock": false, 00:09:52.190 "num_base_bdevs": 3, 00:09:52.190 "num_base_bdevs_discovered": 2, 00:09:52.191 "num_base_bdevs_operational": 3, 00:09:52.191 "base_bdevs_list": [ 00:09:52.191 { 00:09:52.191 "name": "BaseBdev1", 00:09:52.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.191 
"is_configured": false, 00:09:52.191 "data_offset": 0, 00:09:52.191 "data_size": 0 00:09:52.191 }, 00:09:52.191 { 00:09:52.191 "name": "BaseBdev2", 00:09:52.191 "uuid": "bc84fa44-514f-4f5a-9ac2-a19877611d7a", 00:09:52.191 "is_configured": true, 00:09:52.191 "data_offset": 0, 00:09:52.191 "data_size": 65536 00:09:52.191 }, 00:09:52.191 { 00:09:52.191 "name": "BaseBdev3", 00:09:52.191 "uuid": "8495099c-9e5b-4d54-8bcd-3e8d2d77de73", 00:09:52.191 "is_configured": true, 00:09:52.191 "data_offset": 0, 00:09:52.191 "data_size": 65536 00:09:52.191 } 00:09:52.191 ] 00:09:52.191 }' 00:09:52.191 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.191 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.451 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:52.451 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.451 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.451 [2024-11-20 11:19:35.522327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:52.451 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.451 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:52.451 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.451 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.451 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.451 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.451 11:19:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.451 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.451 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.451 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.451 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.451 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.451 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.451 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.451 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.451 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.709 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.709 "name": "Existed_Raid", 00:09:52.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.709 "strip_size_kb": 64, 00:09:52.709 "state": "configuring", 00:09:52.710 "raid_level": "raid0", 00:09:52.710 "superblock": false, 00:09:52.710 "num_base_bdevs": 3, 00:09:52.710 "num_base_bdevs_discovered": 1, 00:09:52.710 "num_base_bdevs_operational": 3, 00:09:52.710 "base_bdevs_list": [ 00:09:52.710 { 00:09:52.710 "name": "BaseBdev1", 00:09:52.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.710 "is_configured": false, 00:09:52.710 "data_offset": 0, 00:09:52.710 "data_size": 0 00:09:52.710 }, 00:09:52.710 { 00:09:52.710 "name": null, 00:09:52.710 "uuid": "bc84fa44-514f-4f5a-9ac2-a19877611d7a", 00:09:52.710 "is_configured": false, 00:09:52.710 "data_offset": 0, 
00:09:52.710 "data_size": 65536 00:09:52.710 }, 00:09:52.710 { 00:09:52.710 "name": "BaseBdev3", 00:09:52.710 "uuid": "8495099c-9e5b-4d54-8bcd-3e8d2d77de73", 00:09:52.710 "is_configured": true, 00:09:52.710 "data_offset": 0, 00:09:52.710 "data_size": 65536 00:09:52.710 } 00:09:52.710 ] 00:09:52.710 }' 00:09:52.710 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.710 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.970 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:52.970 11:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.970 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.970 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.970 11:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.970 [2024-11-20 11:19:36.051929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.970 BaseBdev1 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.970 [ 00:09:52.970 { 00:09:52.970 "name": "BaseBdev1", 00:09:52.970 "aliases": [ 00:09:52.970 "b345568a-e992-4aad-8bd4-f0c91b2f5b1a" 00:09:52.970 ], 00:09:52.970 "product_name": "Malloc disk", 00:09:52.970 "block_size": 512, 00:09:52.970 "num_blocks": 65536, 00:09:52.970 "uuid": "b345568a-e992-4aad-8bd4-f0c91b2f5b1a", 00:09:52.970 "assigned_rate_limits": { 00:09:52.970 "rw_ios_per_sec": 0, 00:09:52.970 "rw_mbytes_per_sec": 0, 00:09:52.970 "r_mbytes_per_sec": 0, 00:09:52.970 "w_mbytes_per_sec": 0 00:09:52.970 }, 00:09:52.970 "claimed": true, 00:09:52.970 "claim_type": "exclusive_write", 00:09:52.970 "zoned": false, 00:09:52.970 "supported_io_types": { 00:09:52.970 "read": true, 00:09:52.970 "write": true, 00:09:52.970 "unmap": 
true, 00:09:52.970 "flush": true, 00:09:52.970 "reset": true, 00:09:52.970 "nvme_admin": false, 00:09:52.970 "nvme_io": false, 00:09:52.970 "nvme_io_md": false, 00:09:52.970 "write_zeroes": true, 00:09:52.970 "zcopy": true, 00:09:52.970 "get_zone_info": false, 00:09:52.970 "zone_management": false, 00:09:52.970 "zone_append": false, 00:09:52.970 "compare": false, 00:09:52.970 "compare_and_write": false, 00:09:52.970 "abort": true, 00:09:52.970 "seek_hole": false, 00:09:52.970 "seek_data": false, 00:09:52.970 "copy": true, 00:09:52.970 "nvme_iov_md": false 00:09:52.970 }, 00:09:52.970 "memory_domains": [ 00:09:52.970 { 00:09:52.970 "dma_device_id": "system", 00:09:52.970 "dma_device_type": 1 00:09:52.970 }, 00:09:52.970 { 00:09:52.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.970 "dma_device_type": 2 00:09:52.970 } 00:09:52.970 ], 00:09:52.970 "driver_specific": {} 00:09:52.970 } 00:09:52.970 ] 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.970 11:19:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.970 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.230 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.230 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.230 "name": "Existed_Raid", 00:09:53.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.230 "strip_size_kb": 64, 00:09:53.230 "state": "configuring", 00:09:53.230 "raid_level": "raid0", 00:09:53.230 "superblock": false, 00:09:53.230 "num_base_bdevs": 3, 00:09:53.230 "num_base_bdevs_discovered": 2, 00:09:53.230 "num_base_bdevs_operational": 3, 00:09:53.230 "base_bdevs_list": [ 00:09:53.230 { 00:09:53.230 "name": "BaseBdev1", 00:09:53.230 "uuid": "b345568a-e992-4aad-8bd4-f0c91b2f5b1a", 00:09:53.230 "is_configured": true, 00:09:53.230 "data_offset": 0, 00:09:53.230 "data_size": 65536 00:09:53.230 }, 00:09:53.230 { 00:09:53.230 "name": null, 00:09:53.230 "uuid": "bc84fa44-514f-4f5a-9ac2-a19877611d7a", 00:09:53.230 "is_configured": false, 00:09:53.230 "data_offset": 0, 00:09:53.231 "data_size": 65536 00:09:53.231 }, 00:09:53.231 { 00:09:53.231 "name": "BaseBdev3", 00:09:53.231 "uuid": "8495099c-9e5b-4d54-8bcd-3e8d2d77de73", 00:09:53.231 "is_configured": true, 00:09:53.231 "data_offset": 0, 
00:09:53.231 "data_size": 65536 00:09:53.231 } 00:09:53.231 ] 00:09:53.231 }' 00:09:53.231 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.231 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.491 [2024-11-20 11:19:36.567294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.491 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.750 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.750 "name": "Existed_Raid", 00:09:53.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.750 "strip_size_kb": 64, 00:09:53.750 "state": "configuring", 00:09:53.750 "raid_level": "raid0", 00:09:53.750 "superblock": false, 00:09:53.750 "num_base_bdevs": 3, 00:09:53.750 "num_base_bdevs_discovered": 1, 00:09:53.750 "num_base_bdevs_operational": 3, 00:09:53.750 "base_bdevs_list": [ 00:09:53.750 { 00:09:53.750 "name": "BaseBdev1", 00:09:53.750 "uuid": "b345568a-e992-4aad-8bd4-f0c91b2f5b1a", 00:09:53.750 "is_configured": true, 00:09:53.750 "data_offset": 0, 00:09:53.750 "data_size": 65536 00:09:53.750 }, 00:09:53.750 { 
00:09:53.750 "name": null, 00:09:53.750 "uuid": "bc84fa44-514f-4f5a-9ac2-a19877611d7a", 00:09:53.750 "is_configured": false, 00:09:53.750 "data_offset": 0, 00:09:53.750 "data_size": 65536 00:09:53.750 }, 00:09:53.750 { 00:09:53.750 "name": null, 00:09:53.750 "uuid": "8495099c-9e5b-4d54-8bcd-3e8d2d77de73", 00:09:53.750 "is_configured": false, 00:09:53.750 "data_offset": 0, 00:09:53.750 "data_size": 65536 00:09:53.750 } 00:09:53.750 ] 00:09:53.750 }' 00:09:53.751 11:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.751 11:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.010 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.011 [2024-11-20 11:19:37.078541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.011 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.270 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.270 "name": "Existed_Raid", 00:09:54.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.270 "strip_size_kb": 64, 00:09:54.270 "state": "configuring", 00:09:54.270 "raid_level": "raid0", 00:09:54.270 
"superblock": false, 00:09:54.270 "num_base_bdevs": 3, 00:09:54.270 "num_base_bdevs_discovered": 2, 00:09:54.270 "num_base_bdevs_operational": 3, 00:09:54.270 "base_bdevs_list": [ 00:09:54.270 { 00:09:54.270 "name": "BaseBdev1", 00:09:54.270 "uuid": "b345568a-e992-4aad-8bd4-f0c91b2f5b1a", 00:09:54.270 "is_configured": true, 00:09:54.270 "data_offset": 0, 00:09:54.270 "data_size": 65536 00:09:54.270 }, 00:09:54.270 { 00:09:54.270 "name": null, 00:09:54.270 "uuid": "bc84fa44-514f-4f5a-9ac2-a19877611d7a", 00:09:54.270 "is_configured": false, 00:09:54.270 "data_offset": 0, 00:09:54.270 "data_size": 65536 00:09:54.270 }, 00:09:54.270 { 00:09:54.270 "name": "BaseBdev3", 00:09:54.270 "uuid": "8495099c-9e5b-4d54-8bcd-3e8d2d77de73", 00:09:54.270 "is_configured": true, 00:09:54.270 "data_offset": 0, 00:09:54.270 "data_size": 65536 00:09:54.270 } 00:09:54.270 ] 00:09:54.270 }' 00:09:54.270 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.270 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.530 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:54.530 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.530 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.530 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.530 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.530 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:54.530 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:54.530 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:54.530 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.530 [2024-11-20 11:19:37.577678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:54.790 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.790 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:54.790 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.790 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.790 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.790 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.790 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.790 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.790 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.790 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.790 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.790 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.790 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.790 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.790 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.790 11:19:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.790 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.790 "name": "Existed_Raid", 00:09:54.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.790 "strip_size_kb": 64, 00:09:54.790 "state": "configuring", 00:09:54.790 "raid_level": "raid0", 00:09:54.790 "superblock": false, 00:09:54.790 "num_base_bdevs": 3, 00:09:54.790 "num_base_bdevs_discovered": 1, 00:09:54.790 "num_base_bdevs_operational": 3, 00:09:54.790 "base_bdevs_list": [ 00:09:54.790 { 00:09:54.790 "name": null, 00:09:54.790 "uuid": "b345568a-e992-4aad-8bd4-f0c91b2f5b1a", 00:09:54.790 "is_configured": false, 00:09:54.790 "data_offset": 0, 00:09:54.790 "data_size": 65536 00:09:54.790 }, 00:09:54.790 { 00:09:54.790 "name": null, 00:09:54.790 "uuid": "bc84fa44-514f-4f5a-9ac2-a19877611d7a", 00:09:54.790 "is_configured": false, 00:09:54.790 "data_offset": 0, 00:09:54.790 "data_size": 65536 00:09:54.790 }, 00:09:54.790 { 00:09:54.790 "name": "BaseBdev3", 00:09:54.790 "uuid": "8495099c-9e5b-4d54-8bcd-3e8d2d77de73", 00:09:54.790 "is_configured": true, 00:09:54.790 "data_offset": 0, 00:09:54.790 "data_size": 65536 00:09:54.790 } 00:09:54.790 ] 00:09:54.790 }' 00:09:54.790 11:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.790 11:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.050 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.050 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.050 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.050 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:55.050 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:55.322 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:55.322 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:55.322 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.322 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.322 [2024-11-20 11:19:38.192364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:55.322 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.322 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:55.322 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.322 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.322 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.322 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.322 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.322 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.322 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.322 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.322 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.322 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:55.322 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.322 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.322 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.322 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.322 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.322 "name": "Existed_Raid", 00:09:55.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.322 "strip_size_kb": 64, 00:09:55.322 "state": "configuring", 00:09:55.322 "raid_level": "raid0", 00:09:55.322 "superblock": false, 00:09:55.322 "num_base_bdevs": 3, 00:09:55.322 "num_base_bdevs_discovered": 2, 00:09:55.322 "num_base_bdevs_operational": 3, 00:09:55.322 "base_bdevs_list": [ 00:09:55.322 { 00:09:55.322 "name": null, 00:09:55.322 "uuid": "b345568a-e992-4aad-8bd4-f0c91b2f5b1a", 00:09:55.322 "is_configured": false, 00:09:55.322 "data_offset": 0, 00:09:55.323 "data_size": 65536 00:09:55.323 }, 00:09:55.323 { 00:09:55.323 "name": "BaseBdev2", 00:09:55.323 "uuid": "bc84fa44-514f-4f5a-9ac2-a19877611d7a", 00:09:55.323 "is_configured": true, 00:09:55.323 "data_offset": 0, 00:09:55.323 "data_size": 65536 00:09:55.323 }, 00:09:55.323 { 00:09:55.323 "name": "BaseBdev3", 00:09:55.323 "uuid": "8495099c-9e5b-4d54-8bcd-3e8d2d77de73", 00:09:55.323 "is_configured": true, 00:09:55.323 "data_offset": 0, 00:09:55.323 "data_size": 65536 00:09:55.323 } 00:09:55.323 ] 00:09:55.323 }' 00:09:55.323 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.323 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.623 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:55.623 
11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.623 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.623 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.623 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.624 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:55.624 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.624 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:55.624 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.624 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.624 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.624 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b345568a-e992-4aad-8bd4-f0c91b2f5b1a 00:09:55.624 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.624 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.884 [2024-11-20 11:19:38.770698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:55.884 [2024-11-20 11:19:38.770752] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:55.884 [2024-11-20 11:19:38.770761] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:55.884 [2024-11-20 11:19:38.771006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:09:55.884 [2024-11-20 11:19:38.771157] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:55.884 [2024-11-20 11:19:38.771166] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:55.884 [2024-11-20 11:19:38.771427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.884 NewBaseBdev 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:55.884 [ 00:09:55.884 { 00:09:55.884 "name": "NewBaseBdev", 00:09:55.884 "aliases": [ 00:09:55.884 "b345568a-e992-4aad-8bd4-f0c91b2f5b1a" 00:09:55.884 ], 00:09:55.884 "product_name": "Malloc disk", 00:09:55.884 "block_size": 512, 00:09:55.884 "num_blocks": 65536, 00:09:55.884 "uuid": "b345568a-e992-4aad-8bd4-f0c91b2f5b1a", 00:09:55.884 "assigned_rate_limits": { 00:09:55.884 "rw_ios_per_sec": 0, 00:09:55.884 "rw_mbytes_per_sec": 0, 00:09:55.884 "r_mbytes_per_sec": 0, 00:09:55.884 "w_mbytes_per_sec": 0 00:09:55.884 }, 00:09:55.884 "claimed": true, 00:09:55.884 "claim_type": "exclusive_write", 00:09:55.884 "zoned": false, 00:09:55.884 "supported_io_types": { 00:09:55.884 "read": true, 00:09:55.884 "write": true, 00:09:55.884 "unmap": true, 00:09:55.884 "flush": true, 00:09:55.884 "reset": true, 00:09:55.884 "nvme_admin": false, 00:09:55.884 "nvme_io": false, 00:09:55.884 "nvme_io_md": false, 00:09:55.884 "write_zeroes": true, 00:09:55.884 "zcopy": true, 00:09:55.884 "get_zone_info": false, 00:09:55.884 "zone_management": false, 00:09:55.884 "zone_append": false, 00:09:55.884 "compare": false, 00:09:55.884 "compare_and_write": false, 00:09:55.884 "abort": true, 00:09:55.884 "seek_hole": false, 00:09:55.884 "seek_data": false, 00:09:55.884 "copy": true, 00:09:55.884 "nvme_iov_md": false 00:09:55.884 }, 00:09:55.884 "memory_domains": [ 00:09:55.884 { 00:09:55.884 "dma_device_id": "system", 00:09:55.884 "dma_device_type": 1 00:09:55.884 }, 00:09:55.884 { 00:09:55.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.884 "dma_device_type": 2 00:09:55.884 } 00:09:55.884 ], 00:09:55.884 "driver_specific": {} 00:09:55.884 } 00:09:55.884 ] 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.884 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.884 "name": "Existed_Raid", 00:09:55.884 "uuid": "ead86e0d-ef67-42a6-bc79-dfa40ff36580", 00:09:55.884 "strip_size_kb": 64, 00:09:55.884 "state": "online", 00:09:55.884 "raid_level": "raid0", 00:09:55.884 "superblock": false, 00:09:55.884 "num_base_bdevs": 3, 00:09:55.884 
"num_base_bdevs_discovered": 3, 00:09:55.884 "num_base_bdevs_operational": 3, 00:09:55.884 "base_bdevs_list": [ 00:09:55.884 { 00:09:55.884 "name": "NewBaseBdev", 00:09:55.884 "uuid": "b345568a-e992-4aad-8bd4-f0c91b2f5b1a", 00:09:55.884 "is_configured": true, 00:09:55.884 "data_offset": 0, 00:09:55.884 "data_size": 65536 00:09:55.884 }, 00:09:55.884 { 00:09:55.884 "name": "BaseBdev2", 00:09:55.884 "uuid": "bc84fa44-514f-4f5a-9ac2-a19877611d7a", 00:09:55.884 "is_configured": true, 00:09:55.884 "data_offset": 0, 00:09:55.884 "data_size": 65536 00:09:55.884 }, 00:09:55.884 { 00:09:55.884 "name": "BaseBdev3", 00:09:55.884 "uuid": "8495099c-9e5b-4d54-8bcd-3e8d2d77de73", 00:09:55.884 "is_configured": true, 00:09:55.884 "data_offset": 0, 00:09:55.884 "data_size": 65536 00:09:55.884 } 00:09:55.884 ] 00:09:55.884 }' 00:09:55.885 11:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.885 11:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.454 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:56.454 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:56.454 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:56.454 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:56.454 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:56.454 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:56.454 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:56.454 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:56.454 11:19:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.454 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.454 [2024-11-20 11:19:39.314177] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.454 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.454 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:56.454 "name": "Existed_Raid", 00:09:56.454 "aliases": [ 00:09:56.454 "ead86e0d-ef67-42a6-bc79-dfa40ff36580" 00:09:56.454 ], 00:09:56.454 "product_name": "Raid Volume", 00:09:56.454 "block_size": 512, 00:09:56.454 "num_blocks": 196608, 00:09:56.454 "uuid": "ead86e0d-ef67-42a6-bc79-dfa40ff36580", 00:09:56.454 "assigned_rate_limits": { 00:09:56.454 "rw_ios_per_sec": 0, 00:09:56.454 "rw_mbytes_per_sec": 0, 00:09:56.454 "r_mbytes_per_sec": 0, 00:09:56.454 "w_mbytes_per_sec": 0 00:09:56.454 }, 00:09:56.454 "claimed": false, 00:09:56.454 "zoned": false, 00:09:56.454 "supported_io_types": { 00:09:56.454 "read": true, 00:09:56.454 "write": true, 00:09:56.454 "unmap": true, 00:09:56.454 "flush": true, 00:09:56.454 "reset": true, 00:09:56.454 "nvme_admin": false, 00:09:56.454 "nvme_io": false, 00:09:56.454 "nvme_io_md": false, 00:09:56.454 "write_zeroes": true, 00:09:56.454 "zcopy": false, 00:09:56.454 "get_zone_info": false, 00:09:56.454 "zone_management": false, 00:09:56.454 "zone_append": false, 00:09:56.455 "compare": false, 00:09:56.455 "compare_and_write": false, 00:09:56.455 "abort": false, 00:09:56.455 "seek_hole": false, 00:09:56.455 "seek_data": false, 00:09:56.455 "copy": false, 00:09:56.455 "nvme_iov_md": false 00:09:56.455 }, 00:09:56.455 "memory_domains": [ 00:09:56.455 { 00:09:56.455 "dma_device_id": "system", 00:09:56.455 "dma_device_type": 1 00:09:56.455 }, 00:09:56.455 { 00:09:56.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.455 "dma_device_type": 2 00:09:56.455 }, 
00:09:56.455 { 00:09:56.455 "dma_device_id": "system", 00:09:56.455 "dma_device_type": 1 00:09:56.455 }, 00:09:56.455 { 00:09:56.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.455 "dma_device_type": 2 00:09:56.455 }, 00:09:56.455 { 00:09:56.455 "dma_device_id": "system", 00:09:56.455 "dma_device_type": 1 00:09:56.455 }, 00:09:56.455 { 00:09:56.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.455 "dma_device_type": 2 00:09:56.455 } 00:09:56.455 ], 00:09:56.455 "driver_specific": { 00:09:56.455 "raid": { 00:09:56.455 "uuid": "ead86e0d-ef67-42a6-bc79-dfa40ff36580", 00:09:56.455 "strip_size_kb": 64, 00:09:56.455 "state": "online", 00:09:56.455 "raid_level": "raid0", 00:09:56.455 "superblock": false, 00:09:56.455 "num_base_bdevs": 3, 00:09:56.455 "num_base_bdevs_discovered": 3, 00:09:56.455 "num_base_bdevs_operational": 3, 00:09:56.455 "base_bdevs_list": [ 00:09:56.455 { 00:09:56.455 "name": "NewBaseBdev", 00:09:56.455 "uuid": "b345568a-e992-4aad-8bd4-f0c91b2f5b1a", 00:09:56.455 "is_configured": true, 00:09:56.455 "data_offset": 0, 00:09:56.455 "data_size": 65536 00:09:56.455 }, 00:09:56.455 { 00:09:56.455 "name": "BaseBdev2", 00:09:56.455 "uuid": "bc84fa44-514f-4f5a-9ac2-a19877611d7a", 00:09:56.455 "is_configured": true, 00:09:56.455 "data_offset": 0, 00:09:56.455 "data_size": 65536 00:09:56.455 }, 00:09:56.455 { 00:09:56.455 "name": "BaseBdev3", 00:09:56.455 "uuid": "8495099c-9e5b-4d54-8bcd-3e8d2d77de73", 00:09:56.455 "is_configured": true, 00:09:56.455 "data_offset": 0, 00:09:56.455 "data_size": 65536 00:09:56.455 } 00:09:56.455 ] 00:09:56.455 } 00:09:56.455 } 00:09:56.455 }' 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:56.455 BaseBdev2 00:09:56.455 BaseBdev3' 00:09:56.455 11:19:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.455 [2024-11-20 11:19:39.561474] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:56.455 [2024-11-20 11:19:39.561508] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:56.455 [2024-11-20 11:19:39.561600] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.455 [2024-11-20 11:19:39.561656] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.455 [2024-11-20 11:19:39.561668] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63936 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63936 ']' 00:09:56.455 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63936 00:09:56.715 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:56.715 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.715 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63936 00:09:56.715 killing process with pid 63936 00:09:56.715 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.715 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.715 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63936' 00:09:56.715 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63936 00:09:56.715 [2024-11-20 11:19:39.610487] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:56.715 11:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63936 00:09:56.975 [2024-11-20 11:19:39.922812] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:58.356 00:09:58.356 real 0m11.093s 00:09:58.356 user 0m17.615s 00:09:58.356 sys 0m1.866s 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.356 ************************************ 00:09:58.356 END TEST raid_state_function_test 00:09:58.356 ************************************ 00:09:58.356 11:19:41 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:58.356 11:19:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:58.356 11:19:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.356 11:19:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:58.356 ************************************ 00:09:58.356 START TEST raid_state_function_test_sb 00:09:58.356 ************************************ 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:58.356 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:58.357 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:58.357 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:58.357 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:58.357 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:58.357 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:58.357 Process raid pid: 64563 00:09:58.357 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64563 
00:09:58.357 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:58.357 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64563' 00:09:58.357 11:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64563 00:09:58.357 11:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64563 ']' 00:09:58.357 11:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.357 11:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.357 11:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.357 11:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.357 11:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.357 [2024-11-20 11:19:41.305215] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:09:58.357 [2024-11-20 11:19:41.305427] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.617 [2024-11-20 11:19:41.484315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.617 [2024-11-20 11:19:41.603150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.876 [2024-11-20 11:19:41.822991] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.876 [2024-11-20 11:19:41.823152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.136 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.136 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:59.136 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:59.136 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.136 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.136 [2024-11-20 11:19:42.198511] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:59.136 [2024-11-20 11:19:42.198628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:59.136 [2024-11-20 11:19:42.198664] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.136 [2024-11-20 11:19:42.198693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.136 [2024-11-20 11:19:42.198715] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:09:59.136 [2024-11-20 11:19:42.198741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:59.136 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.136 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:59.136 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.136 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.136 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.136 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.136 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.136 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.136 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.136 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.136 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.136 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.136 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.136 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.136 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.136 11:19:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.395 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.395 "name": "Existed_Raid", 00:09:59.395 "uuid": "e0091e83-abd5-420f-8efe-bc6d1def7a03", 00:09:59.395 "strip_size_kb": 64, 00:09:59.395 "state": "configuring", 00:09:59.395 "raid_level": "raid0", 00:09:59.395 "superblock": true, 00:09:59.395 "num_base_bdevs": 3, 00:09:59.395 "num_base_bdevs_discovered": 0, 00:09:59.395 "num_base_bdevs_operational": 3, 00:09:59.395 "base_bdevs_list": [ 00:09:59.395 { 00:09:59.395 "name": "BaseBdev1", 00:09:59.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.395 "is_configured": false, 00:09:59.395 "data_offset": 0, 00:09:59.395 "data_size": 0 00:09:59.395 }, 00:09:59.395 { 00:09:59.395 "name": "BaseBdev2", 00:09:59.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.395 "is_configured": false, 00:09:59.395 "data_offset": 0, 00:09:59.395 "data_size": 0 00:09:59.395 }, 00:09:59.395 { 00:09:59.395 "name": "BaseBdev3", 00:09:59.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.395 "is_configured": false, 00:09:59.395 "data_offset": 0, 00:09:59.395 "data_size": 0 00:09:59.395 } 00:09:59.395 ] 00:09:59.395 }' 00:09:59.395 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.395 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.655 [2024-11-20 11:19:42.685595] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.655 [2024-11-20 11:19:42.685634] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.655 [2024-11-20 11:19:42.697601] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:59.655 [2024-11-20 11:19:42.697652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:59.655 [2024-11-20 11:19:42.697662] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.655 [2024-11-20 11:19:42.697673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.655 [2024-11-20 11:19:42.697681] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:59.655 [2024-11-20 11:19:42.697691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.655 [2024-11-20 11:19:42.747099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.655 BaseBdev1 
00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.655 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.915 [ 00:09:59.915 { 00:09:59.915 "name": "BaseBdev1", 00:09:59.915 "aliases": [ 00:09:59.915 "6687777d-6a29-4c24-97e8-520c63c42286" 00:09:59.915 ], 00:09:59.915 "product_name": "Malloc disk", 00:09:59.915 "block_size": 512, 00:09:59.915 "num_blocks": 65536, 00:09:59.915 "uuid": "6687777d-6a29-4c24-97e8-520c63c42286", 00:09:59.915 "assigned_rate_limits": { 00:09:59.915 
"rw_ios_per_sec": 0, 00:09:59.915 "rw_mbytes_per_sec": 0, 00:09:59.915 "r_mbytes_per_sec": 0, 00:09:59.915 "w_mbytes_per_sec": 0 00:09:59.915 }, 00:09:59.915 "claimed": true, 00:09:59.915 "claim_type": "exclusive_write", 00:09:59.915 "zoned": false, 00:09:59.915 "supported_io_types": { 00:09:59.915 "read": true, 00:09:59.915 "write": true, 00:09:59.915 "unmap": true, 00:09:59.915 "flush": true, 00:09:59.915 "reset": true, 00:09:59.915 "nvme_admin": false, 00:09:59.915 "nvme_io": false, 00:09:59.915 "nvme_io_md": false, 00:09:59.915 "write_zeroes": true, 00:09:59.915 "zcopy": true, 00:09:59.915 "get_zone_info": false, 00:09:59.915 "zone_management": false, 00:09:59.915 "zone_append": false, 00:09:59.915 "compare": false, 00:09:59.915 "compare_and_write": false, 00:09:59.915 "abort": true, 00:09:59.915 "seek_hole": false, 00:09:59.915 "seek_data": false, 00:09:59.915 "copy": true, 00:09:59.915 "nvme_iov_md": false 00:09:59.915 }, 00:09:59.915 "memory_domains": [ 00:09:59.915 { 00:09:59.915 "dma_device_id": "system", 00:09:59.915 "dma_device_type": 1 00:09:59.915 }, 00:09:59.915 { 00:09:59.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.915 "dma_device_type": 2 00:09:59.915 } 00:09:59.915 ], 00:09:59.915 "driver_specific": {} 00:09:59.915 } 00:09:59.915 ] 00:09:59.915 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.915 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:59.915 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:59.915 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.915 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.915 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:59.915 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.915 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.915 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.915 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.915 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.915 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.916 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.916 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.916 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.916 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.916 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.916 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.916 "name": "Existed_Raid", 00:09:59.916 "uuid": "fbfc0f4d-3520-4e85-83b2-94afc977e47a", 00:09:59.916 "strip_size_kb": 64, 00:09:59.916 "state": "configuring", 00:09:59.916 "raid_level": "raid0", 00:09:59.916 "superblock": true, 00:09:59.916 "num_base_bdevs": 3, 00:09:59.916 "num_base_bdevs_discovered": 1, 00:09:59.916 "num_base_bdevs_operational": 3, 00:09:59.916 "base_bdevs_list": [ 00:09:59.916 { 00:09:59.916 "name": "BaseBdev1", 00:09:59.916 "uuid": "6687777d-6a29-4c24-97e8-520c63c42286", 00:09:59.916 "is_configured": true, 00:09:59.916 "data_offset": 2048, 00:09:59.916 "data_size": 63488 
00:09:59.916 }, 00:09:59.916 { 00:09:59.916 "name": "BaseBdev2", 00:09:59.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.916 "is_configured": false, 00:09:59.916 "data_offset": 0, 00:09:59.916 "data_size": 0 00:09:59.916 }, 00:09:59.916 { 00:09:59.916 "name": "BaseBdev3", 00:09:59.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.916 "is_configured": false, 00:09:59.916 "data_offset": 0, 00:09:59.916 "data_size": 0 00:09:59.916 } 00:09:59.916 ] 00:09:59.916 }' 00:09:59.916 11:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.916 11:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.174 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:00.174 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.174 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.174 [2024-11-20 11:19:43.266299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:00.174 [2024-11-20 11:19:43.266355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:00.174 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.174 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:00.174 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.174 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.174 [2024-11-20 11:19:43.278335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:00.174 [2024-11-20 
11:19:43.280353] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:00.174 [2024-11-20 11:19:43.280398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:00.174 [2024-11-20 11:19:43.280409] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:00.174 [2024-11-20 11:19:43.280417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:00.174 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.174 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:00.174 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.174 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:00.174 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.174 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.174 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.174 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.174 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.174 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.174 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.434 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.434 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:00.434 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.434 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.434 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.434 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.434 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.434 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.434 "name": "Existed_Raid", 00:10:00.434 "uuid": "eef718b4-7b91-4260-b32b-1555b14a6217", 00:10:00.434 "strip_size_kb": 64, 00:10:00.434 "state": "configuring", 00:10:00.434 "raid_level": "raid0", 00:10:00.434 "superblock": true, 00:10:00.434 "num_base_bdevs": 3, 00:10:00.434 "num_base_bdevs_discovered": 1, 00:10:00.434 "num_base_bdevs_operational": 3, 00:10:00.434 "base_bdevs_list": [ 00:10:00.434 { 00:10:00.434 "name": "BaseBdev1", 00:10:00.434 "uuid": "6687777d-6a29-4c24-97e8-520c63c42286", 00:10:00.434 "is_configured": true, 00:10:00.434 "data_offset": 2048, 00:10:00.434 "data_size": 63488 00:10:00.434 }, 00:10:00.434 { 00:10:00.434 "name": "BaseBdev2", 00:10:00.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.434 "is_configured": false, 00:10:00.434 "data_offset": 0, 00:10:00.434 "data_size": 0 00:10:00.434 }, 00:10:00.434 { 00:10:00.434 "name": "BaseBdev3", 00:10:00.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.434 "is_configured": false, 00:10:00.434 "data_offset": 0, 00:10:00.434 "data_size": 0 00:10:00.434 } 00:10:00.434 ] 00:10:00.434 }' 00:10:00.434 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.434 11:19:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:00.693 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:00.693 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.693 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.952 [2024-11-20 11:19:43.809967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.952 BaseBdev2 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.952 [ 00:10:00.952 { 00:10:00.952 "name": "BaseBdev2", 00:10:00.952 "aliases": [ 00:10:00.952 "b82ef1fd-00ed-45ed-90a9-011505715700" 00:10:00.952 ], 00:10:00.952 "product_name": "Malloc disk", 00:10:00.952 "block_size": 512, 00:10:00.952 "num_blocks": 65536, 00:10:00.952 "uuid": "b82ef1fd-00ed-45ed-90a9-011505715700", 00:10:00.952 "assigned_rate_limits": { 00:10:00.952 "rw_ios_per_sec": 0, 00:10:00.952 "rw_mbytes_per_sec": 0, 00:10:00.952 "r_mbytes_per_sec": 0, 00:10:00.952 "w_mbytes_per_sec": 0 00:10:00.952 }, 00:10:00.952 "claimed": true, 00:10:00.952 "claim_type": "exclusive_write", 00:10:00.952 "zoned": false, 00:10:00.952 "supported_io_types": { 00:10:00.952 "read": true, 00:10:00.952 "write": true, 00:10:00.952 "unmap": true, 00:10:00.952 "flush": true, 00:10:00.952 "reset": true, 00:10:00.952 "nvme_admin": false, 00:10:00.952 "nvme_io": false, 00:10:00.952 "nvme_io_md": false, 00:10:00.952 "write_zeroes": true, 00:10:00.952 "zcopy": true, 00:10:00.952 "get_zone_info": false, 00:10:00.952 "zone_management": false, 00:10:00.952 "zone_append": false, 00:10:00.952 "compare": false, 00:10:00.952 "compare_and_write": false, 00:10:00.952 "abort": true, 00:10:00.952 "seek_hole": false, 00:10:00.952 "seek_data": false, 00:10:00.952 "copy": true, 00:10:00.952 "nvme_iov_md": false 00:10:00.952 }, 00:10:00.952 "memory_domains": [ 00:10:00.952 { 00:10:00.952 "dma_device_id": "system", 00:10:00.952 "dma_device_type": 1 00:10:00.952 }, 00:10:00.952 { 00:10:00.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.952 "dma_device_type": 2 00:10:00.952 } 00:10:00.952 ], 00:10:00.952 "driver_specific": {} 00:10:00.952 } 00:10:00.952 ] 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.952 "name": "Existed_Raid", 00:10:00.952 "uuid": "eef718b4-7b91-4260-b32b-1555b14a6217", 00:10:00.952 "strip_size_kb": 64, 00:10:00.952 "state": "configuring", 00:10:00.952 "raid_level": "raid0", 00:10:00.952 "superblock": true, 00:10:00.952 "num_base_bdevs": 3, 00:10:00.952 "num_base_bdevs_discovered": 2, 00:10:00.952 "num_base_bdevs_operational": 3, 00:10:00.952 "base_bdevs_list": [ 00:10:00.952 { 00:10:00.952 "name": "BaseBdev1", 00:10:00.952 "uuid": "6687777d-6a29-4c24-97e8-520c63c42286", 00:10:00.952 "is_configured": true, 00:10:00.952 "data_offset": 2048, 00:10:00.952 "data_size": 63488 00:10:00.952 }, 00:10:00.952 { 00:10:00.952 "name": "BaseBdev2", 00:10:00.952 "uuid": "b82ef1fd-00ed-45ed-90a9-011505715700", 00:10:00.952 "is_configured": true, 00:10:00.952 "data_offset": 2048, 00:10:00.952 "data_size": 63488 00:10:00.952 }, 00:10:00.952 { 00:10:00.952 "name": "BaseBdev3", 00:10:00.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.952 "is_configured": false, 00:10:00.952 "data_offset": 0, 00:10:00.952 "data_size": 0 00:10:00.952 } 00:10:00.952 ] 00:10:00.952 }' 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.952 11:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.212 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:01.212 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.212 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.212 [2024-11-20 11:19:44.315817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:01.212 [2024-11-20 11:19:44.316248] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:01.212 [2024-11-20 11:19:44.316320] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:01.212 [2024-11-20 11:19:44.316672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:01.212 BaseBdev3 00:10:01.212 [2024-11-20 11:19:44.316891] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:01.212 [2024-11-20 11:19:44.316904] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:01.212 [2024-11-20 11:19:44.317097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.212 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.212 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:01.212 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:01.212 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.212 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:01.212 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.212 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.212 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.212 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.212 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.472 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:01.472 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:01.472 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.472 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.472 [ 00:10:01.472 { 00:10:01.472 "name": "BaseBdev3", 00:10:01.472 "aliases": [ 00:10:01.472 "5d45bb02-4d4f-4a6a-9b60-cc5f5a8062e6" 00:10:01.472 ], 00:10:01.472 "product_name": "Malloc disk", 00:10:01.472 "block_size": 512, 00:10:01.472 "num_blocks": 65536, 00:10:01.472 "uuid": "5d45bb02-4d4f-4a6a-9b60-cc5f5a8062e6", 00:10:01.472 "assigned_rate_limits": { 00:10:01.472 "rw_ios_per_sec": 0, 00:10:01.472 "rw_mbytes_per_sec": 0, 00:10:01.472 "r_mbytes_per_sec": 0, 00:10:01.472 "w_mbytes_per_sec": 0 00:10:01.472 }, 00:10:01.472 "claimed": true, 00:10:01.472 "claim_type": "exclusive_write", 00:10:01.472 "zoned": false, 00:10:01.472 "supported_io_types": { 00:10:01.472 "read": true, 00:10:01.472 "write": true, 00:10:01.472 "unmap": true, 00:10:01.472 "flush": true, 00:10:01.472 "reset": true, 00:10:01.472 "nvme_admin": false, 00:10:01.472 "nvme_io": false, 00:10:01.472 "nvme_io_md": false, 00:10:01.472 "write_zeroes": true, 00:10:01.472 "zcopy": true, 00:10:01.472 "get_zone_info": false, 00:10:01.472 "zone_management": false, 00:10:01.472 "zone_append": false, 00:10:01.472 "compare": false, 00:10:01.472 "compare_and_write": false, 00:10:01.472 "abort": true, 00:10:01.472 "seek_hole": false, 00:10:01.472 "seek_data": false, 00:10:01.472 "copy": true, 00:10:01.472 "nvme_iov_md": false 00:10:01.472 }, 00:10:01.472 "memory_domains": [ 00:10:01.472 { 00:10:01.472 "dma_device_id": "system", 00:10:01.472 "dma_device_type": 1 00:10:01.472 }, 00:10:01.472 { 00:10:01.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.473 "dma_device_type": 2 00:10:01.473 } 00:10:01.473 ], 00:10:01.473 "driver_specific": 
{} 00:10:01.473 } 00:10:01.473 ] 00:10:01.473 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.473 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:01.473 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:01.473 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:01.473 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:01.473 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.473 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.473 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.473 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.473 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.473 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.473 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.473 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.473 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.473 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.473 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.473 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.473 
11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.473 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.473 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.473 "name": "Existed_Raid", 00:10:01.473 "uuid": "eef718b4-7b91-4260-b32b-1555b14a6217", 00:10:01.473 "strip_size_kb": 64, 00:10:01.473 "state": "online", 00:10:01.473 "raid_level": "raid0", 00:10:01.473 "superblock": true, 00:10:01.473 "num_base_bdevs": 3, 00:10:01.473 "num_base_bdevs_discovered": 3, 00:10:01.473 "num_base_bdevs_operational": 3, 00:10:01.473 "base_bdevs_list": [ 00:10:01.473 { 00:10:01.473 "name": "BaseBdev1", 00:10:01.473 "uuid": "6687777d-6a29-4c24-97e8-520c63c42286", 00:10:01.473 "is_configured": true, 00:10:01.473 "data_offset": 2048, 00:10:01.473 "data_size": 63488 00:10:01.473 }, 00:10:01.473 { 00:10:01.473 "name": "BaseBdev2", 00:10:01.473 "uuid": "b82ef1fd-00ed-45ed-90a9-011505715700", 00:10:01.473 "is_configured": true, 00:10:01.473 "data_offset": 2048, 00:10:01.473 "data_size": 63488 00:10:01.473 }, 00:10:01.473 { 00:10:01.473 "name": "BaseBdev3", 00:10:01.473 "uuid": "5d45bb02-4d4f-4a6a-9b60-cc5f5a8062e6", 00:10:01.473 "is_configured": true, 00:10:01.473 "data_offset": 2048, 00:10:01.473 "data_size": 63488 00:10:01.473 } 00:10:01.473 ] 00:10:01.473 }' 00:10:01.473 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.473 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.732 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:01.732 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:01.732 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:10:01.732 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.732 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.732 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.732 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:01.733 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.733 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.733 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.733 [2024-11-20 11:19:44.832049] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.733 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.992 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.992 "name": "Existed_Raid", 00:10:01.992 "aliases": [ 00:10:01.992 "eef718b4-7b91-4260-b32b-1555b14a6217" 00:10:01.992 ], 00:10:01.992 "product_name": "Raid Volume", 00:10:01.992 "block_size": 512, 00:10:01.992 "num_blocks": 190464, 00:10:01.992 "uuid": "eef718b4-7b91-4260-b32b-1555b14a6217", 00:10:01.992 "assigned_rate_limits": { 00:10:01.992 "rw_ios_per_sec": 0, 00:10:01.992 "rw_mbytes_per_sec": 0, 00:10:01.992 "r_mbytes_per_sec": 0, 00:10:01.992 "w_mbytes_per_sec": 0 00:10:01.992 }, 00:10:01.992 "claimed": false, 00:10:01.992 "zoned": false, 00:10:01.992 "supported_io_types": { 00:10:01.992 "read": true, 00:10:01.992 "write": true, 00:10:01.992 "unmap": true, 00:10:01.992 "flush": true, 00:10:01.992 "reset": true, 00:10:01.992 "nvme_admin": false, 00:10:01.992 "nvme_io": false, 00:10:01.992 "nvme_io_md": false, 00:10:01.992 
"write_zeroes": true, 00:10:01.992 "zcopy": false, 00:10:01.992 "get_zone_info": false, 00:10:01.992 "zone_management": false, 00:10:01.992 "zone_append": false, 00:10:01.992 "compare": false, 00:10:01.992 "compare_and_write": false, 00:10:01.992 "abort": false, 00:10:01.992 "seek_hole": false, 00:10:01.992 "seek_data": false, 00:10:01.992 "copy": false, 00:10:01.992 "nvme_iov_md": false 00:10:01.992 }, 00:10:01.992 "memory_domains": [ 00:10:01.992 { 00:10:01.992 "dma_device_id": "system", 00:10:01.992 "dma_device_type": 1 00:10:01.992 }, 00:10:01.992 { 00:10:01.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.992 "dma_device_type": 2 00:10:01.992 }, 00:10:01.992 { 00:10:01.992 "dma_device_id": "system", 00:10:01.992 "dma_device_type": 1 00:10:01.992 }, 00:10:01.992 { 00:10:01.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.992 "dma_device_type": 2 00:10:01.992 }, 00:10:01.992 { 00:10:01.992 "dma_device_id": "system", 00:10:01.992 "dma_device_type": 1 00:10:01.992 }, 00:10:01.992 { 00:10:01.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.992 "dma_device_type": 2 00:10:01.992 } 00:10:01.992 ], 00:10:01.992 "driver_specific": { 00:10:01.992 "raid": { 00:10:01.992 "uuid": "eef718b4-7b91-4260-b32b-1555b14a6217", 00:10:01.992 "strip_size_kb": 64, 00:10:01.992 "state": "online", 00:10:01.992 "raid_level": "raid0", 00:10:01.992 "superblock": true, 00:10:01.992 "num_base_bdevs": 3, 00:10:01.992 "num_base_bdevs_discovered": 3, 00:10:01.992 "num_base_bdevs_operational": 3, 00:10:01.992 "base_bdevs_list": [ 00:10:01.992 { 00:10:01.992 "name": "BaseBdev1", 00:10:01.992 "uuid": "6687777d-6a29-4c24-97e8-520c63c42286", 00:10:01.992 "is_configured": true, 00:10:01.992 "data_offset": 2048, 00:10:01.992 "data_size": 63488 00:10:01.992 }, 00:10:01.992 { 00:10:01.992 "name": "BaseBdev2", 00:10:01.992 "uuid": "b82ef1fd-00ed-45ed-90a9-011505715700", 00:10:01.992 "is_configured": true, 00:10:01.992 "data_offset": 2048, 00:10:01.992 "data_size": 63488 00:10:01.992 }, 
00:10:01.992 { 00:10:01.992 "name": "BaseBdev3", 00:10:01.992 "uuid": "5d45bb02-4d4f-4a6a-9b60-cc5f5a8062e6", 00:10:01.992 "is_configured": true, 00:10:01.992 "data_offset": 2048, 00:10:01.992 "data_size": 63488 00:10:01.992 } 00:10:01.992 ] 00:10:01.992 } 00:10:01.992 } 00:10:01.992 }' 00:10:01.992 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.992 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:01.992 BaseBdev2 00:10:01.992 BaseBdev3' 00:10:01.992 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.992 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.992 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.992 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.992 11:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:01.992 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.992 11:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.992 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.993 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.993 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.993 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.993 
11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:01.993 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.993 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.993 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.993 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.993 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.993 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.993 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.993 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:01.993 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.993 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.993 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.993 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.252 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.252 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.252 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.253 [2024-11-20 11:19:45.139723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:02.253 [2024-11-20 11:19:45.139759] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.253 [2024-11-20 11:19:45.139823] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.253 "name": "Existed_Raid", 00:10:02.253 "uuid": "eef718b4-7b91-4260-b32b-1555b14a6217", 00:10:02.253 "strip_size_kb": 64, 00:10:02.253 "state": "offline", 00:10:02.253 "raid_level": "raid0", 00:10:02.253 "superblock": true, 00:10:02.253 "num_base_bdevs": 3, 00:10:02.253 "num_base_bdevs_discovered": 2, 00:10:02.253 "num_base_bdevs_operational": 2, 00:10:02.253 "base_bdevs_list": [ 00:10:02.253 { 00:10:02.253 "name": null, 00:10:02.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.253 "is_configured": false, 00:10:02.253 "data_offset": 0, 00:10:02.253 "data_size": 63488 00:10:02.253 }, 00:10:02.253 { 00:10:02.253 "name": "BaseBdev2", 00:10:02.253 "uuid": "b82ef1fd-00ed-45ed-90a9-011505715700", 00:10:02.253 "is_configured": true, 00:10:02.253 "data_offset": 2048, 00:10:02.253 "data_size": 63488 00:10:02.253 }, 00:10:02.253 { 00:10:02.253 "name": "BaseBdev3", 00:10:02.253 "uuid": "5d45bb02-4d4f-4a6a-9b60-cc5f5a8062e6", 
00:10:02.253 "is_configured": true, 00:10:02.253 "data_offset": 2048, 00:10:02.253 "data_size": 63488 00:10:02.253 } 00:10:02.253 ] 00:10:02.253 }' 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.253 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.821 [2024-11-20 11:19:45.719726] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.821 11:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.821 [2024-11-20 11:19:45.888661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:02.821 [2024-11-20 11:19:45.888823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.082 BaseBdev2 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.082 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.082 [ 00:10:03.082 { 00:10:03.082 "name": "BaseBdev2", 00:10:03.082 "aliases": [ 00:10:03.082 "64e92792-2fb4-48e8-9853-64007b2f860f" 00:10:03.082 ], 00:10:03.082 "product_name": "Malloc disk", 00:10:03.082 "block_size": 512, 00:10:03.082 "num_blocks": 65536, 00:10:03.082 "uuid": "64e92792-2fb4-48e8-9853-64007b2f860f", 00:10:03.082 "assigned_rate_limits": { 00:10:03.082 "rw_ios_per_sec": 0, 00:10:03.082 "rw_mbytes_per_sec": 0, 00:10:03.082 "r_mbytes_per_sec": 0, 00:10:03.083 "w_mbytes_per_sec": 0 00:10:03.083 }, 00:10:03.083 "claimed": false, 00:10:03.083 "zoned": false, 00:10:03.083 "supported_io_types": { 00:10:03.083 "read": true, 00:10:03.083 "write": true, 00:10:03.083 "unmap": true, 00:10:03.083 "flush": true, 00:10:03.083 "reset": true, 00:10:03.083 "nvme_admin": false, 00:10:03.083 "nvme_io": false, 00:10:03.083 "nvme_io_md": false, 00:10:03.083 "write_zeroes": true, 00:10:03.083 "zcopy": true, 00:10:03.083 "get_zone_info": false, 00:10:03.083 "zone_management": false, 00:10:03.083 
"zone_append": false, 00:10:03.083 "compare": false, 00:10:03.083 "compare_and_write": false, 00:10:03.083 "abort": true, 00:10:03.083 "seek_hole": false, 00:10:03.083 "seek_data": false, 00:10:03.083 "copy": true, 00:10:03.083 "nvme_iov_md": false 00:10:03.083 }, 00:10:03.083 "memory_domains": [ 00:10:03.083 { 00:10:03.083 "dma_device_id": "system", 00:10:03.083 "dma_device_type": 1 00:10:03.083 }, 00:10:03.083 { 00:10:03.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.083 "dma_device_type": 2 00:10:03.083 } 00:10:03.083 ], 00:10:03.083 "driver_specific": {} 00:10:03.083 } 00:10:03.083 ] 00:10:03.083 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.083 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:03.083 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:03.083 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:03.083 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:03.083 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.083 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.343 BaseBdev3 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:03.343 
11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.343 [ 00:10:03.343 { 00:10:03.343 "name": "BaseBdev3", 00:10:03.343 "aliases": [ 00:10:03.343 "b00eb297-feb0-449a-bcee-e3bdce3d1841" 00:10:03.343 ], 00:10:03.343 "product_name": "Malloc disk", 00:10:03.343 "block_size": 512, 00:10:03.343 "num_blocks": 65536, 00:10:03.343 "uuid": "b00eb297-feb0-449a-bcee-e3bdce3d1841", 00:10:03.343 "assigned_rate_limits": { 00:10:03.343 "rw_ios_per_sec": 0, 00:10:03.343 "rw_mbytes_per_sec": 0, 00:10:03.343 "r_mbytes_per_sec": 0, 00:10:03.343 "w_mbytes_per_sec": 0 00:10:03.343 }, 00:10:03.343 "claimed": false, 00:10:03.343 "zoned": false, 00:10:03.343 "supported_io_types": { 00:10:03.343 "read": true, 00:10:03.343 "write": true, 00:10:03.343 "unmap": true, 00:10:03.343 "flush": true, 00:10:03.343 "reset": true, 00:10:03.343 "nvme_admin": false, 00:10:03.343 "nvme_io": false, 00:10:03.343 "nvme_io_md": false, 00:10:03.343 "write_zeroes": true, 00:10:03.343 "zcopy": true, 00:10:03.343 "get_zone_info": false, 
00:10:03.343 "zone_management": false, 00:10:03.343 "zone_append": false, 00:10:03.343 "compare": false, 00:10:03.343 "compare_and_write": false, 00:10:03.343 "abort": true, 00:10:03.343 "seek_hole": false, 00:10:03.343 "seek_data": false, 00:10:03.343 "copy": true, 00:10:03.343 "nvme_iov_md": false 00:10:03.343 }, 00:10:03.343 "memory_domains": [ 00:10:03.343 { 00:10:03.343 "dma_device_id": "system", 00:10:03.343 "dma_device_type": 1 00:10:03.343 }, 00:10:03.343 { 00:10:03.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.343 "dma_device_type": 2 00:10:03.343 } 00:10:03.343 ], 00:10:03.343 "driver_specific": {} 00:10:03.343 } 00:10:03.343 ] 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.343 [2024-11-20 11:19:46.240834] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:03.343 [2024-11-20 11:19:46.241394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:03.343 [2024-11-20 11:19:46.241516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.343 [2024-11-20 11:19:46.243778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:03.343 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.344 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.344 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.344 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.344 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.344 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.344 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.344 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.344 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.344 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.344 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.344 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.344 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.344 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.344 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:03.344 "name": "Existed_Raid", 00:10:03.344 "uuid": "1902aa95-692b-4355-8d02-c494b47926bd", 00:10:03.344 "strip_size_kb": 64, 00:10:03.344 "state": "configuring", 00:10:03.344 "raid_level": "raid0", 00:10:03.344 "superblock": true, 00:10:03.344 "num_base_bdevs": 3, 00:10:03.344 "num_base_bdevs_discovered": 2, 00:10:03.344 "num_base_bdevs_operational": 3, 00:10:03.344 "base_bdevs_list": [ 00:10:03.344 { 00:10:03.344 "name": "BaseBdev1", 00:10:03.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.344 "is_configured": false, 00:10:03.344 "data_offset": 0, 00:10:03.344 "data_size": 0 00:10:03.344 }, 00:10:03.344 { 00:10:03.344 "name": "BaseBdev2", 00:10:03.344 "uuid": "64e92792-2fb4-48e8-9853-64007b2f860f", 00:10:03.344 "is_configured": true, 00:10:03.344 "data_offset": 2048, 00:10:03.344 "data_size": 63488 00:10:03.344 }, 00:10:03.344 { 00:10:03.344 "name": "BaseBdev3", 00:10:03.344 "uuid": "b00eb297-feb0-449a-bcee-e3bdce3d1841", 00:10:03.344 "is_configured": true, 00:10:03.344 "data_offset": 2048, 00:10:03.344 "data_size": 63488 00:10:03.344 } 00:10:03.344 ] 00:10:03.344 }' 00:10:03.344 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.344 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.603 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:03.603 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.603 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.603 [2024-11-20 11:19:46.676076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:03.603 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.603 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:03.603 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.603 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.603 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.603 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.603 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.603 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.603 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.603 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.603 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.603 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.603 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.603 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.603 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.603 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.863 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.863 "name": "Existed_Raid", 00:10:03.863 "uuid": "1902aa95-692b-4355-8d02-c494b47926bd", 00:10:03.863 "strip_size_kb": 64, 00:10:03.863 "state": "configuring", 00:10:03.863 "raid_level": "raid0", 
00:10:03.863 "superblock": true, 00:10:03.863 "num_base_bdevs": 3, 00:10:03.863 "num_base_bdevs_discovered": 1, 00:10:03.863 "num_base_bdevs_operational": 3, 00:10:03.863 "base_bdevs_list": [ 00:10:03.864 { 00:10:03.864 "name": "BaseBdev1", 00:10:03.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.864 "is_configured": false, 00:10:03.864 "data_offset": 0, 00:10:03.864 "data_size": 0 00:10:03.864 }, 00:10:03.864 { 00:10:03.864 "name": null, 00:10:03.864 "uuid": "64e92792-2fb4-48e8-9853-64007b2f860f", 00:10:03.864 "is_configured": false, 00:10:03.864 "data_offset": 0, 00:10:03.864 "data_size": 63488 00:10:03.864 }, 00:10:03.864 { 00:10:03.864 "name": "BaseBdev3", 00:10:03.864 "uuid": "b00eb297-feb0-449a-bcee-e3bdce3d1841", 00:10:03.864 "is_configured": true, 00:10:03.864 "data_offset": 2048, 00:10:03.864 "data_size": 63488 00:10:03.864 } 00:10:03.864 ] 00:10:03.864 }' 00:10:03.864 11:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.864 11:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.123 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:04.123 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.123 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.123 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.123 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.123 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:04.123 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:04.123 11:19:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.124 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.384 [2024-11-20 11:19:47.246724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.384 BaseBdev1 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.384 [ 00:10:04.384 { 00:10:04.384 "name": "BaseBdev1", 00:10:04.384 
"aliases": [ 00:10:04.384 "7cabbdba-5a72-4c2f-b50b-50acf35e2c00" 00:10:04.384 ], 00:10:04.384 "product_name": "Malloc disk", 00:10:04.384 "block_size": 512, 00:10:04.384 "num_blocks": 65536, 00:10:04.384 "uuid": "7cabbdba-5a72-4c2f-b50b-50acf35e2c00", 00:10:04.384 "assigned_rate_limits": { 00:10:04.384 "rw_ios_per_sec": 0, 00:10:04.384 "rw_mbytes_per_sec": 0, 00:10:04.384 "r_mbytes_per_sec": 0, 00:10:04.384 "w_mbytes_per_sec": 0 00:10:04.384 }, 00:10:04.384 "claimed": true, 00:10:04.384 "claim_type": "exclusive_write", 00:10:04.384 "zoned": false, 00:10:04.384 "supported_io_types": { 00:10:04.384 "read": true, 00:10:04.384 "write": true, 00:10:04.384 "unmap": true, 00:10:04.384 "flush": true, 00:10:04.384 "reset": true, 00:10:04.384 "nvme_admin": false, 00:10:04.384 "nvme_io": false, 00:10:04.384 "nvme_io_md": false, 00:10:04.384 "write_zeroes": true, 00:10:04.384 "zcopy": true, 00:10:04.384 "get_zone_info": false, 00:10:04.384 "zone_management": false, 00:10:04.384 "zone_append": false, 00:10:04.384 "compare": false, 00:10:04.384 "compare_and_write": false, 00:10:04.384 "abort": true, 00:10:04.384 "seek_hole": false, 00:10:04.384 "seek_data": false, 00:10:04.384 "copy": true, 00:10:04.384 "nvme_iov_md": false 00:10:04.384 }, 00:10:04.384 "memory_domains": [ 00:10:04.384 { 00:10:04.384 "dma_device_id": "system", 00:10:04.384 "dma_device_type": 1 00:10:04.384 }, 00:10:04.384 { 00:10:04.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.384 "dma_device_type": 2 00:10:04.384 } 00:10:04.384 ], 00:10:04.384 "driver_specific": {} 00:10:04.384 } 00:10:04.384 ] 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:04.384 11:19:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.384 "name": "Existed_Raid", 00:10:04.384 "uuid": "1902aa95-692b-4355-8d02-c494b47926bd", 00:10:04.384 "strip_size_kb": 64, 00:10:04.384 "state": "configuring", 00:10:04.384 "raid_level": "raid0", 00:10:04.384 "superblock": true, 00:10:04.384 "num_base_bdevs": 3, 00:10:04.384 
"num_base_bdevs_discovered": 2, 00:10:04.384 "num_base_bdevs_operational": 3, 00:10:04.384 "base_bdevs_list": [ 00:10:04.384 { 00:10:04.384 "name": "BaseBdev1", 00:10:04.384 "uuid": "7cabbdba-5a72-4c2f-b50b-50acf35e2c00", 00:10:04.384 "is_configured": true, 00:10:04.384 "data_offset": 2048, 00:10:04.384 "data_size": 63488 00:10:04.384 }, 00:10:04.384 { 00:10:04.384 "name": null, 00:10:04.384 "uuid": "64e92792-2fb4-48e8-9853-64007b2f860f", 00:10:04.384 "is_configured": false, 00:10:04.384 "data_offset": 0, 00:10:04.384 "data_size": 63488 00:10:04.384 }, 00:10:04.384 { 00:10:04.384 "name": "BaseBdev3", 00:10:04.384 "uuid": "b00eb297-feb0-449a-bcee-e3bdce3d1841", 00:10:04.384 "is_configured": true, 00:10:04.384 "data_offset": 2048, 00:10:04.384 "data_size": 63488 00:10:04.384 } 00:10:04.384 ] 00:10:04.384 }' 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.384 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.643 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:04.643 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.644 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.644 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.644 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.644 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:04.644 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:04.644 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.644 11:19:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.644 [2024-11-20 11:19:47.746005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:04.644 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.644 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:04.644 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.644 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.644 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.644 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.644 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.644 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.644 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.644 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.644 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.644 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.644 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.644 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.644 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.904 11:19:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.904 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.904 "name": "Existed_Raid", 00:10:04.904 "uuid": "1902aa95-692b-4355-8d02-c494b47926bd", 00:10:04.904 "strip_size_kb": 64, 00:10:04.904 "state": "configuring", 00:10:04.904 "raid_level": "raid0", 00:10:04.904 "superblock": true, 00:10:04.904 "num_base_bdevs": 3, 00:10:04.904 "num_base_bdevs_discovered": 1, 00:10:04.904 "num_base_bdevs_operational": 3, 00:10:04.904 "base_bdevs_list": [ 00:10:04.904 { 00:10:04.904 "name": "BaseBdev1", 00:10:04.904 "uuid": "7cabbdba-5a72-4c2f-b50b-50acf35e2c00", 00:10:04.904 "is_configured": true, 00:10:04.904 "data_offset": 2048, 00:10:04.904 "data_size": 63488 00:10:04.904 }, 00:10:04.904 { 00:10:04.904 "name": null, 00:10:04.904 "uuid": "64e92792-2fb4-48e8-9853-64007b2f860f", 00:10:04.904 "is_configured": false, 00:10:04.904 "data_offset": 0, 00:10:04.904 "data_size": 63488 00:10:04.904 }, 00:10:04.904 { 00:10:04.904 "name": null, 00:10:04.904 "uuid": "b00eb297-feb0-449a-bcee-e3bdce3d1841", 00:10:04.904 "is_configured": false, 00:10:04.904 "data_offset": 0, 00:10:04.904 "data_size": 63488 00:10:04.904 } 00:10:04.904 ] 00:10:04.904 }' 00:10:04.904 11:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.904 11:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.163 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.163 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:05.163 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.163 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.163 11:19:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.163 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:05.163 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:05.163 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.164 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.164 [2024-11-20 11:19:48.245283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:05.164 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.164 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:05.164 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.164 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.164 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.164 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.164 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.164 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.164 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.164 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.164 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:05.164 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.164 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.164 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.164 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.164 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.423 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.423 "name": "Existed_Raid", 00:10:05.423 "uuid": "1902aa95-692b-4355-8d02-c494b47926bd", 00:10:05.423 "strip_size_kb": 64, 00:10:05.423 "state": "configuring", 00:10:05.423 "raid_level": "raid0", 00:10:05.423 "superblock": true, 00:10:05.423 "num_base_bdevs": 3, 00:10:05.423 "num_base_bdevs_discovered": 2, 00:10:05.423 "num_base_bdevs_operational": 3, 00:10:05.423 "base_bdevs_list": [ 00:10:05.423 { 00:10:05.423 "name": "BaseBdev1", 00:10:05.423 "uuid": "7cabbdba-5a72-4c2f-b50b-50acf35e2c00", 00:10:05.423 "is_configured": true, 00:10:05.423 "data_offset": 2048, 00:10:05.423 "data_size": 63488 00:10:05.423 }, 00:10:05.423 { 00:10:05.423 "name": null, 00:10:05.423 "uuid": "64e92792-2fb4-48e8-9853-64007b2f860f", 00:10:05.423 "is_configured": false, 00:10:05.423 "data_offset": 0, 00:10:05.423 "data_size": 63488 00:10:05.423 }, 00:10:05.423 { 00:10:05.423 "name": "BaseBdev3", 00:10:05.423 "uuid": "b00eb297-feb0-449a-bcee-e3bdce3d1841", 00:10:05.423 "is_configured": true, 00:10:05.423 "data_offset": 2048, 00:10:05.423 "data_size": 63488 00:10:05.423 } 00:10:05.423 ] 00:10:05.423 }' 00:10:05.423 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.423 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:10:05.682 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.682 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:05.682 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.682 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.682 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.682 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:05.682 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:05.682 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.682 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.682 [2024-11-20 11:19:48.740578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.941 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.941 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:05.941 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.942 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.942 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.942 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.942 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:10:05.942 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.942 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.942 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.942 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.942 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.942 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.942 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.942 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.942 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.942 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.942 "name": "Existed_Raid", 00:10:05.942 "uuid": "1902aa95-692b-4355-8d02-c494b47926bd", 00:10:05.942 "strip_size_kb": 64, 00:10:05.942 "state": "configuring", 00:10:05.942 "raid_level": "raid0", 00:10:05.942 "superblock": true, 00:10:05.942 "num_base_bdevs": 3, 00:10:05.942 "num_base_bdevs_discovered": 1, 00:10:05.942 "num_base_bdevs_operational": 3, 00:10:05.942 "base_bdevs_list": [ 00:10:05.942 { 00:10:05.942 "name": null, 00:10:05.942 "uuid": "7cabbdba-5a72-4c2f-b50b-50acf35e2c00", 00:10:05.942 "is_configured": false, 00:10:05.942 "data_offset": 0, 00:10:05.942 "data_size": 63488 00:10:05.942 }, 00:10:05.942 { 00:10:05.942 "name": null, 00:10:05.942 "uuid": "64e92792-2fb4-48e8-9853-64007b2f860f", 00:10:05.942 "is_configured": false, 00:10:05.942 "data_offset": 0, 00:10:05.942 "data_size": 63488 00:10:05.942 
}, 00:10:05.942 { 00:10:05.942 "name": "BaseBdev3", 00:10:05.942 "uuid": "b00eb297-feb0-449a-bcee-e3bdce3d1841", 00:10:05.942 "is_configured": true, 00:10:05.942 "data_offset": 2048, 00:10:05.942 "data_size": 63488 00:10:05.942 } 00:10:05.942 ] 00:10:05.942 }' 00:10:05.942 11:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.942 11:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.200 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.200 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:06.200 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.200 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.502 [2024-11-20 11:19:49.359685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.502 "name": "Existed_Raid", 00:10:06.502 "uuid": "1902aa95-692b-4355-8d02-c494b47926bd", 00:10:06.502 "strip_size_kb": 64, 00:10:06.502 "state": "configuring", 00:10:06.502 "raid_level": "raid0", 00:10:06.502 "superblock": true, 00:10:06.502 "num_base_bdevs": 3, 00:10:06.502 "num_base_bdevs_discovered": 2, 00:10:06.502 
"num_base_bdevs_operational": 3, 00:10:06.502 "base_bdevs_list": [ 00:10:06.502 { 00:10:06.502 "name": null, 00:10:06.502 "uuid": "7cabbdba-5a72-4c2f-b50b-50acf35e2c00", 00:10:06.502 "is_configured": false, 00:10:06.502 "data_offset": 0, 00:10:06.502 "data_size": 63488 00:10:06.502 }, 00:10:06.502 { 00:10:06.502 "name": "BaseBdev2", 00:10:06.502 "uuid": "64e92792-2fb4-48e8-9853-64007b2f860f", 00:10:06.502 "is_configured": true, 00:10:06.502 "data_offset": 2048, 00:10:06.502 "data_size": 63488 00:10:06.502 }, 00:10:06.502 { 00:10:06.502 "name": "BaseBdev3", 00:10:06.502 "uuid": "b00eb297-feb0-449a-bcee-e3bdce3d1841", 00:10:06.502 "is_configured": true, 00:10:06.502 "data_offset": 2048, 00:10:06.502 "data_size": 63488 00:10:06.502 } 00:10:06.502 ] 00:10:06.502 }' 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.502 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.762 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:06.762 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.762 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.762 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.762 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7cabbdba-5a72-4c2f-b50b-50acf35e2c00 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.023 [2024-11-20 11:19:49.981349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:07.023 [2024-11-20 11:19:49.981775] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:07.023 [2024-11-20 11:19:49.981802] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:07.023 [2024-11-20 11:19:49.982090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:07.023 NewBaseBdev 00:10:07.023 [2024-11-20 11:19:49.982261] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:07.023 [2024-11-20 11:19:49.982272] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:07.023 [2024-11-20 11:19:49.982430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:07.023 11:19:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.023 11:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.023 [ 00:10:07.023 { 00:10:07.023 "name": "NewBaseBdev", 00:10:07.023 "aliases": [ 00:10:07.023 "7cabbdba-5a72-4c2f-b50b-50acf35e2c00" 00:10:07.023 ], 00:10:07.023 "product_name": "Malloc disk", 00:10:07.023 "block_size": 512, 00:10:07.023 "num_blocks": 65536, 00:10:07.023 "uuid": "7cabbdba-5a72-4c2f-b50b-50acf35e2c00", 00:10:07.023 "assigned_rate_limits": { 00:10:07.023 "rw_ios_per_sec": 0, 00:10:07.023 "rw_mbytes_per_sec": 0, 00:10:07.023 "r_mbytes_per_sec": 0, 00:10:07.023 "w_mbytes_per_sec": 0 00:10:07.023 }, 00:10:07.023 "claimed": true, 00:10:07.023 "claim_type": "exclusive_write", 00:10:07.023 "zoned": false, 00:10:07.023 "supported_io_types": { 00:10:07.023 "read": true, 00:10:07.023 "write": true, 00:10:07.023 "unmap": true, 
00:10:07.023 "flush": true, 00:10:07.023 "reset": true, 00:10:07.023 "nvme_admin": false, 00:10:07.023 "nvme_io": false, 00:10:07.023 "nvme_io_md": false, 00:10:07.023 "write_zeroes": true, 00:10:07.023 "zcopy": true, 00:10:07.023 "get_zone_info": false, 00:10:07.023 "zone_management": false, 00:10:07.023 "zone_append": false, 00:10:07.023 "compare": false, 00:10:07.023 "compare_and_write": false, 00:10:07.023 "abort": true, 00:10:07.023 "seek_hole": false, 00:10:07.023 "seek_data": false, 00:10:07.023 "copy": true, 00:10:07.023 "nvme_iov_md": false 00:10:07.023 }, 00:10:07.023 "memory_domains": [ 00:10:07.023 { 00:10:07.023 "dma_device_id": "system", 00:10:07.023 "dma_device_type": 1 00:10:07.023 }, 00:10:07.023 { 00:10:07.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.023 "dma_device_type": 2 00:10:07.023 } 00:10:07.023 ], 00:10:07.023 "driver_specific": {} 00:10:07.023 } 00:10:07.023 ] 00:10:07.023 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.023 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:07.023 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:07.023 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.023 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.023 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.023 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.023 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.023 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.023 11:19:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.023 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.023 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.023 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.023 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.023 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.023 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.023 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.023 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.023 "name": "Existed_Raid", 00:10:07.023 "uuid": "1902aa95-692b-4355-8d02-c494b47926bd", 00:10:07.023 "strip_size_kb": 64, 00:10:07.023 "state": "online", 00:10:07.023 "raid_level": "raid0", 00:10:07.023 "superblock": true, 00:10:07.023 "num_base_bdevs": 3, 00:10:07.023 "num_base_bdevs_discovered": 3, 00:10:07.023 "num_base_bdevs_operational": 3, 00:10:07.023 "base_bdevs_list": [ 00:10:07.023 { 00:10:07.023 "name": "NewBaseBdev", 00:10:07.023 "uuid": "7cabbdba-5a72-4c2f-b50b-50acf35e2c00", 00:10:07.023 "is_configured": true, 00:10:07.023 "data_offset": 2048, 00:10:07.023 "data_size": 63488 00:10:07.023 }, 00:10:07.023 { 00:10:07.023 "name": "BaseBdev2", 00:10:07.023 "uuid": "64e92792-2fb4-48e8-9853-64007b2f860f", 00:10:07.023 "is_configured": true, 00:10:07.023 "data_offset": 2048, 00:10:07.023 "data_size": 63488 00:10:07.023 }, 00:10:07.023 { 00:10:07.023 "name": "BaseBdev3", 00:10:07.023 "uuid": "b00eb297-feb0-449a-bcee-e3bdce3d1841", 00:10:07.023 "is_configured": 
true, 00:10:07.023 "data_offset": 2048, 00:10:07.023 "data_size": 63488 00:10:07.023 } 00:10:07.023 ] 00:10:07.023 }' 00:10:07.023 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.023 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.593 [2024-11-20 11:19:50.480973] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:07.593 "name": "Existed_Raid", 00:10:07.593 "aliases": [ 00:10:07.593 "1902aa95-692b-4355-8d02-c494b47926bd" 00:10:07.593 ], 00:10:07.593 "product_name": "Raid Volume", 
00:10:07.593 "block_size": 512, 00:10:07.593 "num_blocks": 190464, 00:10:07.593 "uuid": "1902aa95-692b-4355-8d02-c494b47926bd", 00:10:07.593 "assigned_rate_limits": { 00:10:07.593 "rw_ios_per_sec": 0, 00:10:07.593 "rw_mbytes_per_sec": 0, 00:10:07.593 "r_mbytes_per_sec": 0, 00:10:07.593 "w_mbytes_per_sec": 0 00:10:07.593 }, 00:10:07.593 "claimed": false, 00:10:07.593 "zoned": false, 00:10:07.593 "supported_io_types": { 00:10:07.593 "read": true, 00:10:07.593 "write": true, 00:10:07.593 "unmap": true, 00:10:07.593 "flush": true, 00:10:07.593 "reset": true, 00:10:07.593 "nvme_admin": false, 00:10:07.593 "nvme_io": false, 00:10:07.593 "nvme_io_md": false, 00:10:07.593 "write_zeroes": true, 00:10:07.593 "zcopy": false, 00:10:07.593 "get_zone_info": false, 00:10:07.593 "zone_management": false, 00:10:07.593 "zone_append": false, 00:10:07.593 "compare": false, 00:10:07.593 "compare_and_write": false, 00:10:07.593 "abort": false, 00:10:07.593 "seek_hole": false, 00:10:07.593 "seek_data": false, 00:10:07.593 "copy": false, 00:10:07.593 "nvme_iov_md": false 00:10:07.593 }, 00:10:07.593 "memory_domains": [ 00:10:07.593 { 00:10:07.593 "dma_device_id": "system", 00:10:07.593 "dma_device_type": 1 00:10:07.593 }, 00:10:07.593 { 00:10:07.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.593 "dma_device_type": 2 00:10:07.593 }, 00:10:07.593 { 00:10:07.593 "dma_device_id": "system", 00:10:07.593 "dma_device_type": 1 00:10:07.593 }, 00:10:07.593 { 00:10:07.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.593 "dma_device_type": 2 00:10:07.593 }, 00:10:07.593 { 00:10:07.593 "dma_device_id": "system", 00:10:07.593 "dma_device_type": 1 00:10:07.593 }, 00:10:07.593 { 00:10:07.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.593 "dma_device_type": 2 00:10:07.593 } 00:10:07.593 ], 00:10:07.593 "driver_specific": { 00:10:07.593 "raid": { 00:10:07.593 "uuid": "1902aa95-692b-4355-8d02-c494b47926bd", 00:10:07.593 "strip_size_kb": 64, 00:10:07.593 "state": "online", 00:10:07.593 
"raid_level": "raid0", 00:10:07.593 "superblock": true, 00:10:07.593 "num_base_bdevs": 3, 00:10:07.593 "num_base_bdevs_discovered": 3, 00:10:07.593 "num_base_bdevs_operational": 3, 00:10:07.593 "base_bdevs_list": [ 00:10:07.593 { 00:10:07.593 "name": "NewBaseBdev", 00:10:07.593 "uuid": "7cabbdba-5a72-4c2f-b50b-50acf35e2c00", 00:10:07.593 "is_configured": true, 00:10:07.593 "data_offset": 2048, 00:10:07.593 "data_size": 63488 00:10:07.593 }, 00:10:07.593 { 00:10:07.593 "name": "BaseBdev2", 00:10:07.593 "uuid": "64e92792-2fb4-48e8-9853-64007b2f860f", 00:10:07.593 "is_configured": true, 00:10:07.593 "data_offset": 2048, 00:10:07.593 "data_size": 63488 00:10:07.593 }, 00:10:07.593 { 00:10:07.593 "name": "BaseBdev3", 00:10:07.593 "uuid": "b00eb297-feb0-449a-bcee-e3bdce3d1841", 00:10:07.593 "is_configured": true, 00:10:07.593 "data_offset": 2048, 00:10:07.593 "data_size": 63488 00:10:07.593 } 00:10:07.593 ] 00:10:07.593 } 00:10:07.593 } 00:10:07.593 }' 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:07.593 BaseBdev2 00:10:07.593 BaseBdev3' 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.593 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.852 11:19:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.852 [2024-11-20 11:19:50.764142] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:07.852 [2024-11-20 11:19:50.764183] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.852 [2024-11-20 11:19:50.764283] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.852 [2024-11-20 11:19:50.764354] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.852 [2024-11-20 11:19:50.764375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64563 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64563 ']' 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64563 00:10:07.852 11:19:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64563 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.852 killing process with pid 64563 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64563' 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64563 00:10:07.852 [2024-11-20 11:19:50.808918] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.852 11:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64563 00:10:08.113 [2024-11-20 11:19:51.166862] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:09.488 11:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:09.488 00:10:09.488 real 0m11.116s 00:10:09.488 user 0m17.682s 00:10:09.488 sys 0m1.754s 00:10:09.488 11:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.488 11:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.488 ************************************ 00:10:09.488 END TEST raid_state_function_test_sb 00:10:09.488 ************************************ 00:10:09.488 11:19:52 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:10:09.488 11:19:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:09.488 11:19:52 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.488 11:19:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:09.488 ************************************ 00:10:09.488 START TEST raid_superblock_test 00:10:09.488 ************************************ 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:09.488 11:19:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65190 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65190 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65190 ']' 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.488 11:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.488 [2024-11-20 11:19:52.482267] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:10:09.488 [2024-11-20 11:19:52.482395] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65190 ] 00:10:09.746 [2024-11-20 11:19:52.660872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.746 [2024-11-20 11:19:52.779152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.004 [2024-11-20 11:19:52.981677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.004 [2024-11-20 11:19:52.981750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.264 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.264 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:10.264 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:10.264 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.264 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:10.264 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:10.264 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:10.264 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:10.264 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:10.264 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:10.264 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:10.264 
11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.264 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.525 malloc1 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.525 [2024-11-20 11:19:53.391487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:10.525 [2024-11-20 11:19:53.391594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.525 [2024-11-20 11:19:53.391621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:10.525 [2024-11-20 11:19:53.391630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.525 [2024-11-20 11:19:53.393885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.525 [2024-11-20 11:19:53.393929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:10.525 pt1 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.525 malloc2 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.525 [2024-11-20 11:19:53.447983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:10.525 [2024-11-20 11:19:53.448045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.525 [2024-11-20 11:19:53.448068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:10.525 [2024-11-20 11:19:53.448077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.525 [2024-11-20 11:19:53.450249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.525 [2024-11-20 11:19:53.450280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:10.525 
pt2 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.525 malloc3 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.525 [2024-11-20 11:19:53.515690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:10.525 [2024-11-20 11:19:53.515753] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.525 [2024-11-20 11:19:53.515774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:10.525 [2024-11-20 11:19:53.515784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.525 [2024-11-20 11:19:53.517942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.525 [2024-11-20 11:19:53.517986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:10.525 pt3 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.525 [2024-11-20 11:19:53.527726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:10.525 [2024-11-20 11:19:53.529568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:10.525 [2024-11-20 11:19:53.529633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:10.525 [2024-11-20 11:19:53.529789] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:10.525 [2024-11-20 11:19:53.529803] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:10.525 [2024-11-20 11:19:53.530078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:10.525 [2024-11-20 11:19:53.530259] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:10.525 [2024-11-20 11:19:53.530269] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:10.525 [2024-11-20 11:19:53.530431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.525 11:19:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.525 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.525 "name": "raid_bdev1", 00:10:10.525 "uuid": "9a41d3cc-8197-479f-a024-fdc89d651db7", 00:10:10.525 "strip_size_kb": 64, 00:10:10.525 "state": "online", 00:10:10.525 "raid_level": "raid0", 00:10:10.525 "superblock": true, 00:10:10.525 "num_base_bdevs": 3, 00:10:10.525 "num_base_bdevs_discovered": 3, 00:10:10.525 "num_base_bdevs_operational": 3, 00:10:10.525 "base_bdevs_list": [ 00:10:10.525 { 00:10:10.525 "name": "pt1", 00:10:10.525 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:10.526 "is_configured": true, 00:10:10.526 "data_offset": 2048, 00:10:10.526 "data_size": 63488 00:10:10.526 }, 00:10:10.526 { 00:10:10.526 "name": "pt2", 00:10:10.526 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:10.526 "is_configured": true, 00:10:10.526 "data_offset": 2048, 00:10:10.526 "data_size": 63488 00:10:10.526 }, 00:10:10.526 { 00:10:10.526 "name": "pt3", 00:10:10.526 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:10.526 "is_configured": true, 00:10:10.526 "data_offset": 2048, 00:10:10.526 "data_size": 63488 00:10:10.526 } 00:10:10.526 ] 00:10:10.526 }' 00:10:10.526 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.526 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.095 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.095 [2024-11-20 11:19:54.011641] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.095 "name": "raid_bdev1", 00:10:11.095 "aliases": [ 00:10:11.095 "9a41d3cc-8197-479f-a024-fdc89d651db7" 00:10:11.095 ], 00:10:11.095 "product_name": "Raid Volume", 00:10:11.095 "block_size": 512, 00:10:11.095 "num_blocks": 190464, 00:10:11.095 "uuid": "9a41d3cc-8197-479f-a024-fdc89d651db7", 00:10:11.095 "assigned_rate_limits": { 00:10:11.095 "rw_ios_per_sec": 0, 00:10:11.095 "rw_mbytes_per_sec": 0, 00:10:11.095 "r_mbytes_per_sec": 0, 00:10:11.095 "w_mbytes_per_sec": 0 00:10:11.095 }, 00:10:11.095 "claimed": false, 00:10:11.095 "zoned": false, 00:10:11.095 "supported_io_types": { 00:10:11.095 "read": true, 00:10:11.095 "write": true, 00:10:11.095 "unmap": true, 00:10:11.095 "flush": true, 00:10:11.095 "reset": true, 00:10:11.095 "nvme_admin": false, 00:10:11.095 "nvme_io": false, 00:10:11.095 "nvme_io_md": false, 00:10:11.095 "write_zeroes": true, 00:10:11.095 "zcopy": false, 00:10:11.095 "get_zone_info": false, 00:10:11.095 "zone_management": false, 00:10:11.095 "zone_append": false, 00:10:11.095 "compare": 
false, 00:10:11.095 "compare_and_write": false, 00:10:11.095 "abort": false, 00:10:11.095 "seek_hole": false, 00:10:11.095 "seek_data": false, 00:10:11.095 "copy": false, 00:10:11.095 "nvme_iov_md": false 00:10:11.095 }, 00:10:11.095 "memory_domains": [ 00:10:11.095 { 00:10:11.095 "dma_device_id": "system", 00:10:11.095 "dma_device_type": 1 00:10:11.095 }, 00:10:11.095 { 00:10:11.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.095 "dma_device_type": 2 00:10:11.095 }, 00:10:11.095 { 00:10:11.095 "dma_device_id": "system", 00:10:11.095 "dma_device_type": 1 00:10:11.095 }, 00:10:11.095 { 00:10:11.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.095 "dma_device_type": 2 00:10:11.095 }, 00:10:11.095 { 00:10:11.095 "dma_device_id": "system", 00:10:11.095 "dma_device_type": 1 00:10:11.095 }, 00:10:11.095 { 00:10:11.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.095 "dma_device_type": 2 00:10:11.095 } 00:10:11.095 ], 00:10:11.095 "driver_specific": { 00:10:11.095 "raid": { 00:10:11.095 "uuid": "9a41d3cc-8197-479f-a024-fdc89d651db7", 00:10:11.095 "strip_size_kb": 64, 00:10:11.095 "state": "online", 00:10:11.095 "raid_level": "raid0", 00:10:11.095 "superblock": true, 00:10:11.095 "num_base_bdevs": 3, 00:10:11.095 "num_base_bdevs_discovered": 3, 00:10:11.095 "num_base_bdevs_operational": 3, 00:10:11.095 "base_bdevs_list": [ 00:10:11.095 { 00:10:11.095 "name": "pt1", 00:10:11.095 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.095 "is_configured": true, 00:10:11.095 "data_offset": 2048, 00:10:11.095 "data_size": 63488 00:10:11.095 }, 00:10:11.095 { 00:10:11.095 "name": "pt2", 00:10:11.095 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.095 "is_configured": true, 00:10:11.095 "data_offset": 2048, 00:10:11.095 "data_size": 63488 00:10:11.095 }, 00:10:11.095 { 00:10:11.095 "name": "pt3", 00:10:11.095 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.095 "is_configured": true, 00:10:11.095 "data_offset": 2048, 00:10:11.095 "data_size": 
63488 00:10:11.095 } 00:10:11.095 ] 00:10:11.095 } 00:10:11.095 } 00:10:11.095 }' 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:11.095 pt2 00:10:11.095 pt3' 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:11.095 11:19:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.096 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.096 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.355 [2024-11-20 11:19:54.287064] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9a41d3cc-8197-479f-a024-fdc89d651db7 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9a41d3cc-8197-479f-a024-fdc89d651db7 ']' 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.355 [2024-11-20 11:19:54.330654] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.355 [2024-11-20 11:19:54.330681] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.355 [2024-11-20 11:19:54.330762] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.355 [2024-11-20 11:19:54.330824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.355 [2024-11-20 11:19:54.330834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.355 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.614 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:11.614 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:11.614 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:11.614 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:11.614 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:11.614 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.614 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:11.614 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.614 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:11.614 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.614 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.614 [2024-11-20 11:19:54.482441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:11.614 [2024-11-20 11:19:54.484446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:11.614 [2024-11-20 11:19:54.484568] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:11.614 [2024-11-20 11:19:54.484652] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:11.614 [2024-11-20 11:19:54.484755] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:11.615 [2024-11-20 11:19:54.484840] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:11.615 [2024-11-20 11:19:54.484895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.615 request: 00:10:11.615 [2024-11-20 11:19:54.484926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:11.615 { 00:10:11.615 "name": "raid_bdev1", 00:10:11.615 "raid_level": "raid0", 00:10:11.615 "base_bdevs": [ 00:10:11.615 "malloc1", 00:10:11.615 "malloc2", 00:10:11.615 "malloc3" 00:10:11.615 ], 00:10:11.615 "strip_size_kb": 64, 00:10:11.615 "superblock": false, 00:10:11.615 "method": "bdev_raid_create", 00:10:11.615 "req_id": 1 00:10:11.615 } 00:10:11.615 Got JSON-RPC error response 00:10:11.615 response: 00:10:11.615 { 00:10:11.615 "code": -17, 00:10:11.615 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:11.615 } 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.615 [2024-11-20 11:19:54.546294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:11.615 [2024-11-20 11:19:54.546395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.615 [2024-11-20 11:19:54.546433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:11.615 [2024-11-20 11:19:54.546500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.615 [2024-11-20 11:19:54.548853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.615 [2024-11-20 11:19:54.548927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:11.615 [2024-11-20 11:19:54.549073] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:11.615 [2024-11-20 11:19:54.549180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:10:11.615 pt1 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.615 "name": "raid_bdev1", 00:10:11.615 "uuid": "9a41d3cc-8197-479f-a024-fdc89d651db7", 00:10:11.615 
"strip_size_kb": 64, 00:10:11.615 "state": "configuring", 00:10:11.615 "raid_level": "raid0", 00:10:11.615 "superblock": true, 00:10:11.615 "num_base_bdevs": 3, 00:10:11.615 "num_base_bdevs_discovered": 1, 00:10:11.615 "num_base_bdevs_operational": 3, 00:10:11.615 "base_bdevs_list": [ 00:10:11.615 { 00:10:11.615 "name": "pt1", 00:10:11.615 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.615 "is_configured": true, 00:10:11.615 "data_offset": 2048, 00:10:11.615 "data_size": 63488 00:10:11.615 }, 00:10:11.615 { 00:10:11.615 "name": null, 00:10:11.615 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.615 "is_configured": false, 00:10:11.615 "data_offset": 2048, 00:10:11.615 "data_size": 63488 00:10:11.615 }, 00:10:11.615 { 00:10:11.615 "name": null, 00:10:11.615 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.615 "is_configured": false, 00:10:11.615 "data_offset": 2048, 00:10:11.615 "data_size": 63488 00:10:11.615 } 00:10:11.615 ] 00:10:11.615 }' 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.615 11:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.255 [2024-11-20 11:19:55.021536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:12.255 [2024-11-20 11:19:55.021605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.255 [2024-11-20 11:19:55.021628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:10:12.255 [2024-11-20 11:19:55.021638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.255 [2024-11-20 11:19:55.022115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.255 [2024-11-20 11:19:55.022139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:12.255 [2024-11-20 11:19:55.022234] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:12.255 [2024-11-20 11:19:55.022264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:12.255 pt2 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.255 [2024-11-20 11:19:55.033519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.255 11:19:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.255 "name": "raid_bdev1", 00:10:12.255 "uuid": "9a41d3cc-8197-479f-a024-fdc89d651db7", 00:10:12.255 "strip_size_kb": 64, 00:10:12.255 "state": "configuring", 00:10:12.255 "raid_level": "raid0", 00:10:12.255 "superblock": true, 00:10:12.255 "num_base_bdevs": 3, 00:10:12.255 "num_base_bdevs_discovered": 1, 00:10:12.255 "num_base_bdevs_operational": 3, 00:10:12.255 "base_bdevs_list": [ 00:10:12.255 { 00:10:12.255 "name": "pt1", 00:10:12.255 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.255 "is_configured": true, 00:10:12.255 "data_offset": 2048, 00:10:12.255 "data_size": 63488 00:10:12.255 }, 00:10:12.255 { 00:10:12.255 "name": null, 00:10:12.255 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.255 "is_configured": false, 00:10:12.255 "data_offset": 0, 00:10:12.255 "data_size": 63488 00:10:12.255 }, 00:10:12.255 { 00:10:12.255 "name": null, 00:10:12.255 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.255 
"is_configured": false, 00:10:12.255 "data_offset": 2048, 00:10:12.255 "data_size": 63488 00:10:12.255 } 00:10:12.255 ] 00:10:12.255 }' 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.255 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.515 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:12.515 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:12.515 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:12.515 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.515 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.515 [2024-11-20 11:19:55.460772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:12.515 [2024-11-20 11:19:55.460922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.515 [2024-11-20 11:19:55.460958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:12.515 [2024-11-20 11:19:55.461009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.515 [2024-11-20 11:19:55.461551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.515 [2024-11-20 11:19:55.461622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:12.515 [2024-11-20 11:19:55.461745] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:12.515 [2024-11-20 11:19:55.461805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:12.515 pt2 00:10:12.515 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:12.515 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:12.515 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:12.515 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:12.515 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.515 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.515 [2024-11-20 11:19:55.472711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:12.515 [2024-11-20 11:19:55.472796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.515 [2024-11-20 11:19:55.472827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:12.515 [2024-11-20 11:19:55.472860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.515 [2024-11-20 11:19:55.473250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.515 [2024-11-20 11:19:55.473309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:12.515 [2024-11-20 11:19:55.473392] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:12.515 [2024-11-20 11:19:55.473440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:12.516 [2024-11-20 11:19:55.473599] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:12.516 [2024-11-20 11:19:55.473640] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:12.516 [2024-11-20 11:19:55.473917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:12.516 [2024-11-20 11:19:55.474094] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:12.516 [2024-11-20 11:19:55.474132] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:12.516 [2024-11-20 11:19:55.474302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.516 pt3 00:10:12.516 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.516 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:12.516 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:12.516 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:12.516 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.516 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.516 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.516 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.516 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.516 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.516 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.516 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.516 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.516 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.516 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:10:12.516 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.516 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.516 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.516 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.516 "name": "raid_bdev1", 00:10:12.516 "uuid": "9a41d3cc-8197-479f-a024-fdc89d651db7", 00:10:12.516 "strip_size_kb": 64, 00:10:12.516 "state": "online", 00:10:12.516 "raid_level": "raid0", 00:10:12.516 "superblock": true, 00:10:12.516 "num_base_bdevs": 3, 00:10:12.516 "num_base_bdevs_discovered": 3, 00:10:12.516 "num_base_bdevs_operational": 3, 00:10:12.516 "base_bdevs_list": [ 00:10:12.516 { 00:10:12.516 "name": "pt1", 00:10:12.516 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.516 "is_configured": true, 00:10:12.516 "data_offset": 2048, 00:10:12.516 "data_size": 63488 00:10:12.516 }, 00:10:12.516 { 00:10:12.516 "name": "pt2", 00:10:12.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.516 "is_configured": true, 00:10:12.516 "data_offset": 2048, 00:10:12.516 "data_size": 63488 00:10:12.516 }, 00:10:12.516 { 00:10:12.516 "name": "pt3", 00:10:12.516 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.516 "is_configured": true, 00:10:12.516 "data_offset": 2048, 00:10:12.516 "data_size": 63488 00:10:12.516 } 00:10:12.516 ] 00:10:12.516 }' 00:10:12.516 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.516 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.085 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:13.085 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:13.085 11:19:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.085 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.085 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.085 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.085 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.085 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:13.085 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.085 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.085 [2024-11-20 11:19:55.948250] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.085 11:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.085 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.085 "name": "raid_bdev1", 00:10:13.085 "aliases": [ 00:10:13.085 "9a41d3cc-8197-479f-a024-fdc89d651db7" 00:10:13.085 ], 00:10:13.085 "product_name": "Raid Volume", 00:10:13.085 "block_size": 512, 00:10:13.085 "num_blocks": 190464, 00:10:13.086 "uuid": "9a41d3cc-8197-479f-a024-fdc89d651db7", 00:10:13.086 "assigned_rate_limits": { 00:10:13.086 "rw_ios_per_sec": 0, 00:10:13.086 "rw_mbytes_per_sec": 0, 00:10:13.086 "r_mbytes_per_sec": 0, 00:10:13.086 "w_mbytes_per_sec": 0 00:10:13.086 }, 00:10:13.086 "claimed": false, 00:10:13.086 "zoned": false, 00:10:13.086 "supported_io_types": { 00:10:13.086 "read": true, 00:10:13.086 "write": true, 00:10:13.086 "unmap": true, 00:10:13.086 "flush": true, 00:10:13.086 "reset": true, 00:10:13.086 "nvme_admin": false, 00:10:13.086 "nvme_io": false, 00:10:13.086 "nvme_io_md": false, 00:10:13.086 
"write_zeroes": true, 00:10:13.086 "zcopy": false, 00:10:13.086 "get_zone_info": false, 00:10:13.086 "zone_management": false, 00:10:13.086 "zone_append": false, 00:10:13.086 "compare": false, 00:10:13.086 "compare_and_write": false, 00:10:13.086 "abort": false, 00:10:13.086 "seek_hole": false, 00:10:13.086 "seek_data": false, 00:10:13.086 "copy": false, 00:10:13.086 "nvme_iov_md": false 00:10:13.086 }, 00:10:13.086 "memory_domains": [ 00:10:13.086 { 00:10:13.086 "dma_device_id": "system", 00:10:13.086 "dma_device_type": 1 00:10:13.086 }, 00:10:13.086 { 00:10:13.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.086 "dma_device_type": 2 00:10:13.086 }, 00:10:13.086 { 00:10:13.086 "dma_device_id": "system", 00:10:13.086 "dma_device_type": 1 00:10:13.086 }, 00:10:13.086 { 00:10:13.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.086 "dma_device_type": 2 00:10:13.086 }, 00:10:13.086 { 00:10:13.086 "dma_device_id": "system", 00:10:13.086 "dma_device_type": 1 00:10:13.086 }, 00:10:13.086 { 00:10:13.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.086 "dma_device_type": 2 00:10:13.086 } 00:10:13.086 ], 00:10:13.086 "driver_specific": { 00:10:13.086 "raid": { 00:10:13.086 "uuid": "9a41d3cc-8197-479f-a024-fdc89d651db7", 00:10:13.086 "strip_size_kb": 64, 00:10:13.086 "state": "online", 00:10:13.086 "raid_level": "raid0", 00:10:13.086 "superblock": true, 00:10:13.086 "num_base_bdevs": 3, 00:10:13.086 "num_base_bdevs_discovered": 3, 00:10:13.086 "num_base_bdevs_operational": 3, 00:10:13.086 "base_bdevs_list": [ 00:10:13.086 { 00:10:13.086 "name": "pt1", 00:10:13.086 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.086 "is_configured": true, 00:10:13.086 "data_offset": 2048, 00:10:13.086 "data_size": 63488 00:10:13.086 }, 00:10:13.086 { 00:10:13.086 "name": "pt2", 00:10:13.086 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.086 "is_configured": true, 00:10:13.086 "data_offset": 2048, 00:10:13.086 "data_size": 63488 00:10:13.086 }, 00:10:13.086 
{ 00:10:13.086 "name": "pt3", 00:10:13.086 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.086 "is_configured": true, 00:10:13.086 "data_offset": 2048, 00:10:13.086 "data_size": 63488 00:10:13.086 } 00:10:13.086 ] 00:10:13.086 } 00:10:13.086 } 00:10:13.086 }' 00:10:13.086 11:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:13.086 pt2 00:10:13.086 pt3' 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:13.086 11:19:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.086 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.345 
[2024-11-20 11:19:56.243778] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9a41d3cc-8197-479f-a024-fdc89d651db7 '!=' 9a41d3cc-8197-479f-a024-fdc89d651db7 ']' 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65190 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65190 ']' 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65190 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65190 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65190' 00:10:13.345 killing process with pid 65190 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65190 00:10:13.345 11:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65190 00:10:13.345 [2024-11-20 11:19:56.331015] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:13.345 [2024-11-20 11:19:56.331130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.345 [2024-11-20 11:19:56.331252] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.345 [2024-11-20 11:19:56.331308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:13.604 [2024-11-20 11:19:56.650022] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:14.981 11:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:14.981 00:10:14.981 real 0m5.409s 00:10:14.981 user 0m7.835s 00:10:14.981 sys 0m0.861s 00:10:14.981 11:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.981 11:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.981 ************************************ 00:10:14.981 END TEST raid_superblock_test 00:10:14.981 ************************************ 00:10:14.981 11:19:57 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:10:14.981 11:19:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:14.981 11:19:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.981 11:19:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:14.981 ************************************ 00:10:14.981 START TEST raid_read_error_test 00:10:14.981 ************************************ 00:10:14.981 11:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:10:14.981 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:14.981 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:14.981 11:19:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:14.981 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:14.981 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.981 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:14.981 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:14.981 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.981 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:14.981 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:14.981 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.981 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:14.981 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:14.981 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.981 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:14.981 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:14.981 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:14.981 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:14.981 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:14.981 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:14.982 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:14.982 11:19:57 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:14.982 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:14.982 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:14.982 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:14.982 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YaSvCLeSO3 00:10:14.982 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65443 00:10:14.982 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:14.982 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65443 00:10:14.982 11:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65443 ']' 00:10:14.982 11:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.982 11:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.982 11:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.982 11:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.982 11:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.982 [2024-11-20 11:19:57.978640] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:10:14.982 [2024-11-20 11:19:57.978831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65443 ] 00:10:15.242 [2024-11-20 11:19:58.154761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.242 [2024-11-20 11:19:58.279447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.502 [2024-11-20 11:19:58.488961] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.502 [2024-11-20 11:19:58.488997] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.762 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.762 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:15.762 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.762 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:15.762 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.762 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.022 BaseBdev1_malloc 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.022 true 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.022 [2024-11-20 11:19:58.895661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:16.022 [2024-11-20 11:19:58.895719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.022 [2024-11-20 11:19:58.895739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:16.022 [2024-11-20 11:19:58.895750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.022 [2024-11-20 11:19:58.897815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.022 [2024-11-20 11:19:58.897926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:16.022 BaseBdev1 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.022 BaseBdev2_malloc 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.022 true 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.022 [2024-11-20 11:19:58.963680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:16.022 [2024-11-20 11:19:58.963791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.022 [2024-11-20 11:19:58.963814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:16.022 [2024-11-20 11:19:58.963826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.022 [2024-11-20 11:19:58.966168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.022 [2024-11-20 11:19:58.966208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:16.022 BaseBdev2 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.022 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.022 BaseBdev3_malloc 00:10:16.022 11:19:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.022 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:16.022 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.022 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.022 true 00:10:16.022 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.022 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:16.022 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.022 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.022 [2024-11-20 11:19:59.042689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:16.022 [2024-11-20 11:19:59.042745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.022 [2024-11-20 11:19:59.042765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:16.022 [2024-11-20 11:19:59.042776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.022 [2024-11-20 11:19:59.045093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.022 [2024-11-20 11:19:59.045136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:16.022 BaseBdev3 00:10:16.022 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.022 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:16.022 11:19:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.022 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.022 [2024-11-20 11:19:59.054745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.022 [2024-11-20 11:19:59.056783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.022 [2024-11-20 11:19:59.056872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.022 [2024-11-20 11:19:59.057080] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:16.022 [2024-11-20 11:19:59.057096] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:16.022 [2024-11-20 11:19:59.057378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:16.022 [2024-11-20 11:19:59.057584] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:16.022 [2024-11-20 11:19:59.057600] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:16.022 [2024-11-20 11:19:59.057778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.022 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.022 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:16.022 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.022 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.022 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.022 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.022 11:19:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.023 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.023 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.023 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.023 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.023 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.023 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.023 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.023 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.023 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.023 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.023 "name": "raid_bdev1", 00:10:16.023 "uuid": "c0845b4d-689c-485b-b5d2-99543f0991af", 00:10:16.023 "strip_size_kb": 64, 00:10:16.023 "state": "online", 00:10:16.023 "raid_level": "raid0", 00:10:16.023 "superblock": true, 00:10:16.023 "num_base_bdevs": 3, 00:10:16.023 "num_base_bdevs_discovered": 3, 00:10:16.023 "num_base_bdevs_operational": 3, 00:10:16.023 "base_bdevs_list": [ 00:10:16.023 { 00:10:16.023 "name": "BaseBdev1", 00:10:16.023 "uuid": "caf45952-4e8c-5356-8da7-39ae041074ac", 00:10:16.023 "is_configured": true, 00:10:16.023 "data_offset": 2048, 00:10:16.023 "data_size": 63488 00:10:16.023 }, 00:10:16.023 { 00:10:16.023 "name": "BaseBdev2", 00:10:16.023 "uuid": "17617b52-baf7-522e-8120-9afc078ae180", 00:10:16.023 "is_configured": true, 00:10:16.023 "data_offset": 2048, 00:10:16.023 "data_size": 63488 
00:10:16.023 }, 00:10:16.023 { 00:10:16.023 "name": "BaseBdev3", 00:10:16.023 "uuid": "16b141c2-ec6d-50b0-8fae-39b3d499b3a0", 00:10:16.023 "is_configured": true, 00:10:16.023 "data_offset": 2048, 00:10:16.023 "data_size": 63488 00:10:16.023 } 00:10:16.023 ] 00:10:16.023 }' 00:10:16.023 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.023 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.589 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:16.589 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:16.589 [2024-11-20 11:19:59.571026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:17.527 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:17.527 11:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.527 11:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.528 "name": "raid_bdev1", 00:10:17.528 "uuid": "c0845b4d-689c-485b-b5d2-99543f0991af", 00:10:17.528 "strip_size_kb": 64, 00:10:17.528 "state": "online", 00:10:17.528 "raid_level": "raid0", 00:10:17.528 "superblock": true, 00:10:17.528 "num_base_bdevs": 3, 00:10:17.528 "num_base_bdevs_discovered": 3, 00:10:17.528 "num_base_bdevs_operational": 3, 00:10:17.528 "base_bdevs_list": [ 00:10:17.528 { 00:10:17.528 "name": "BaseBdev1", 00:10:17.528 "uuid": "caf45952-4e8c-5356-8da7-39ae041074ac", 00:10:17.528 "is_configured": true, 00:10:17.528 "data_offset": 2048, 00:10:17.528 "data_size": 63488 
00:10:17.528 }, 00:10:17.528 { 00:10:17.528 "name": "BaseBdev2", 00:10:17.528 "uuid": "17617b52-baf7-522e-8120-9afc078ae180", 00:10:17.528 "is_configured": true, 00:10:17.528 "data_offset": 2048, 00:10:17.528 "data_size": 63488 00:10:17.528 }, 00:10:17.528 { 00:10:17.528 "name": "BaseBdev3", 00:10:17.528 "uuid": "16b141c2-ec6d-50b0-8fae-39b3d499b3a0", 00:10:17.528 "is_configured": true, 00:10:17.528 "data_offset": 2048, 00:10:17.528 "data_size": 63488 00:10:17.528 } 00:10:17.528 ] 00:10:17.528 }' 00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.528 11:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.099 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:18.099 11:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.099 11:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.099 [2024-11-20 11:20:00.951879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:18.099 [2024-11-20 11:20:00.951917] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:18.099 [2024-11-20 11:20:00.954742] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.099 [2024-11-20 11:20:00.954795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.099 [2024-11-20 11:20:00.954834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.099 [2024-11-20 11:20:00.954842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:18.099 { 00:10:18.099 "results": [ 00:10:18.099 { 00:10:18.099 "job": "raid_bdev1", 00:10:18.099 "core_mask": "0x1", 00:10:18.099 "workload": "randrw", 00:10:18.099 "percentage": 50, 
00:10:18.099 "status": "finished", 00:10:18.099 "queue_depth": 1, 00:10:18.099 "io_size": 131072, 00:10:18.099 "runtime": 1.381627, 00:10:18.099 "iops": 14784.742915417837, 00:10:18.099 "mibps": 1848.0928644272296, 00:10:18.099 "io_failed": 1, 00:10:18.099 "io_timeout": 0, 00:10:18.099 "avg_latency_us": 93.98160278340457, 00:10:18.099 "min_latency_us": 26.494323144104804, 00:10:18.099 "max_latency_us": 1752.8733624454148 00:10:18.099 } 00:10:18.099 ], 00:10:18.099 "core_count": 1 00:10:18.099 } 00:10:18.099 11:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.099 11:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65443 00:10:18.099 11:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65443 ']' 00:10:18.099 11:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65443 00:10:18.099 11:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:18.099 11:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:18.099 11:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65443 00:10:18.099 killing process with pid 65443 00:10:18.099 11:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:18.099 11:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:18.099 11:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65443' 00:10:18.099 11:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65443 00:10:18.099 [2024-11-20 11:20:00.986645] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:18.099 11:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65443 00:10:18.359 [2024-11-20 
11:20:01.230691] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.738 11:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YaSvCLeSO3 00:10:19.738 11:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:19.738 11:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:19.738 11:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:19.739 11:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:19.739 11:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:19.739 11:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:19.739 11:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:19.739 00:10:19.739 real 0m4.609s 00:10:19.739 user 0m5.488s 00:10:19.739 sys 0m0.542s 00:10:19.739 11:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.739 11:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.739 ************************************ 00:10:19.739 END TEST raid_read_error_test 00:10:19.739 ************************************ 00:10:19.739 11:20:02 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:10:19.739 11:20:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:19.739 11:20:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.739 11:20:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.739 ************************************ 00:10:19.739 START TEST raid_write_error_test 00:10:19.739 ************************************ 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:10:19.739 11:20:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:19.739 11:20:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pVk9P6NPYm 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65588 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65588 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65588 ']' 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.739 11:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.739 [2024-11-20 11:20:02.645895] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:10:19.739 [2024-11-20 11:20:02.646570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65588 ] 00:10:19.739 [2024-11-20 11:20:02.831889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.998 [2024-11-20 11:20:02.951907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.258 [2024-11-20 11:20:03.165911] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.258 [2024-11-20 11:20:03.165983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.517 BaseBdev1_malloc 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.517 true 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.517 [2024-11-20 11:20:03.575849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:20.517 [2024-11-20 11:20:03.575921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.517 [2024-11-20 11:20:03.575945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:20.517 [2024-11-20 11:20:03.575958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.517 [2024-11-20 11:20:03.578211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.517 [2024-11-20 11:20:03.578252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:20.517 BaseBdev1 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:20.517 BaseBdev2_malloc 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.517 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.776 true 00:10:20.776 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.777 [2024-11-20 11:20:03.644220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:20.777 [2024-11-20 11:20:03.644289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.777 [2024-11-20 11:20:03.644311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:20.777 [2024-11-20 11:20:03.644323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.777 [2024-11-20 11:20:03.646715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.777 [2024-11-20 11:20:03.646772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:20.777 BaseBdev2 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.777 11:20:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.777 BaseBdev3_malloc 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.777 true 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.777 [2024-11-20 11:20:03.724574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:20.777 [2024-11-20 11:20:03.724640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.777 [2024-11-20 11:20:03.724677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:20.777 [2024-11-20 11:20:03.724700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.777 [2024-11-20 11:20:03.726815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.777 [2024-11-20 11:20:03.726853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:20.777 BaseBdev3 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.777 [2024-11-20 11:20:03.736639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:20.777 [2024-11-20 11:20:03.738630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.777 [2024-11-20 11:20:03.738738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.777 [2024-11-20 11:20:03.738955] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:20.777 [2024-11-20 11:20:03.738979] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:20.777 [2024-11-20 11:20:03.739270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:20.777 [2024-11-20 11:20:03.739469] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:20.777 [2024-11-20 11:20:03.739492] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:20.777 [2024-11-20 11:20:03.739690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.777 "name": "raid_bdev1", 00:10:20.777 "uuid": "7a44f2ce-fdd4-4029-99dd-dbc9b1e444dc", 00:10:20.777 "strip_size_kb": 64, 00:10:20.777 "state": "online", 00:10:20.777 "raid_level": "raid0", 00:10:20.777 "superblock": true, 00:10:20.777 "num_base_bdevs": 3, 00:10:20.777 "num_base_bdevs_discovered": 3, 00:10:20.777 "num_base_bdevs_operational": 3, 00:10:20.777 "base_bdevs_list": [ 00:10:20.777 { 00:10:20.777 "name": "BaseBdev1", 
00:10:20.777 "uuid": "b64e65f7-313d-5256-9bda-d243342ff959", 00:10:20.777 "is_configured": true, 00:10:20.777 "data_offset": 2048, 00:10:20.777 "data_size": 63488 00:10:20.777 }, 00:10:20.777 { 00:10:20.777 "name": "BaseBdev2", 00:10:20.777 "uuid": "08feb820-194d-501f-bb92-86dd61df3b3a", 00:10:20.777 "is_configured": true, 00:10:20.777 "data_offset": 2048, 00:10:20.777 "data_size": 63488 00:10:20.777 }, 00:10:20.777 { 00:10:20.777 "name": "BaseBdev3", 00:10:20.777 "uuid": "4cc9ceaa-02d3-5a44-b783-e5ad05742530", 00:10:20.777 "is_configured": true, 00:10:20.777 "data_offset": 2048, 00:10:20.777 "data_size": 63488 00:10:20.777 } 00:10:20.777 ] 00:10:20.777 }' 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.777 11:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.036 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:21.036 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:21.294 [2024-11-20 11:20:04.209347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:22.232 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:22.232 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.233 "name": "raid_bdev1", 00:10:22.233 "uuid": "7a44f2ce-fdd4-4029-99dd-dbc9b1e444dc", 00:10:22.233 "strip_size_kb": 64, 00:10:22.233 "state": "online", 00:10:22.233 
"raid_level": "raid0", 00:10:22.233 "superblock": true, 00:10:22.233 "num_base_bdevs": 3, 00:10:22.233 "num_base_bdevs_discovered": 3, 00:10:22.233 "num_base_bdevs_operational": 3, 00:10:22.233 "base_bdevs_list": [ 00:10:22.233 { 00:10:22.233 "name": "BaseBdev1", 00:10:22.233 "uuid": "b64e65f7-313d-5256-9bda-d243342ff959", 00:10:22.233 "is_configured": true, 00:10:22.233 "data_offset": 2048, 00:10:22.233 "data_size": 63488 00:10:22.233 }, 00:10:22.233 { 00:10:22.233 "name": "BaseBdev2", 00:10:22.233 "uuid": "08feb820-194d-501f-bb92-86dd61df3b3a", 00:10:22.233 "is_configured": true, 00:10:22.233 "data_offset": 2048, 00:10:22.233 "data_size": 63488 00:10:22.233 }, 00:10:22.233 { 00:10:22.233 "name": "BaseBdev3", 00:10:22.233 "uuid": "4cc9ceaa-02d3-5a44-b783-e5ad05742530", 00:10:22.233 "is_configured": true, 00:10:22.233 "data_offset": 2048, 00:10:22.233 "data_size": 63488 00:10:22.233 } 00:10:22.233 ] 00:10:22.233 }' 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.233 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.492 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:22.492 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.492 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.492 [2024-11-20 11:20:05.573335] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:22.492 [2024-11-20 11:20:05.573377] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.492 [2024-11-20 11:20:05.576295] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.492 [2024-11-20 11:20:05.576350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.492 [2024-11-20 11:20:05.576392] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.492 [2024-11-20 11:20:05.576402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:22.492 { 00:10:22.492 "results": [ 00:10:22.492 { 00:10:22.492 "job": "raid_bdev1", 00:10:22.492 "core_mask": "0x1", 00:10:22.492 "workload": "randrw", 00:10:22.492 "percentage": 50, 00:10:22.492 "status": "finished", 00:10:22.492 "queue_depth": 1, 00:10:22.492 "io_size": 131072, 00:10:22.492 "runtime": 1.364689, 00:10:22.492 "iops": 14993.159613655565, 00:10:22.492 "mibps": 1874.1449517069457, 00:10:22.492 "io_failed": 1, 00:10:22.492 "io_timeout": 0, 00:10:22.492 "avg_latency_us": 92.6833213040767, 00:10:22.492 "min_latency_us": 26.382532751091702, 00:10:22.492 "max_latency_us": 1345.0620087336245 00:10:22.492 } 00:10:22.492 ], 00:10:22.492 "core_count": 1 00:10:22.492 } 00:10:22.492 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.492 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65588 00:10:22.492 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65588 ']' 00:10:22.492 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65588 00:10:22.492 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:22.492 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.492 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65588 00:10:22.752 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.752 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.752 killing process with pid 65588 00:10:22.752 11:20:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65588' 00:10:22.752 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65588 00:10:22.752 [2024-11-20 11:20:05.618796] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.752 11:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65588 00:10:22.752 [2024-11-20 11:20:05.854675] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.142 11:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pVk9P6NPYm 00:10:24.142 11:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:24.142 11:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:24.142 11:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:24.142 11:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:24.142 11:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:24.142 11:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:24.142 11:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:24.142 00:10:24.142 real 0m4.540s 00:10:24.142 user 0m5.353s 00:10:24.142 sys 0m0.556s 00:10:24.142 11:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.142 11:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.142 ************************************ 00:10:24.142 END TEST raid_write_error_test 00:10:24.142 ************************************ 00:10:24.142 11:20:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:24.142 11:20:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:10:24.142 11:20:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:24.142 11:20:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.142 11:20:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.142 ************************************ 00:10:24.142 START TEST raid_state_function_test 00:10:24.142 ************************************ 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:24.142 11:20:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65734 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:24.142 Process raid pid: 65734 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65734' 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65734 00:10:24.142 11:20:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65734 ']' 00:10:24.142 11:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.143 11:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.143 11:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.143 11:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.143 11:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.143 [2024-11-20 11:20:07.244718] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:10:24.143 [2024-11-20 11:20:07.244842] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.402 [2024-11-20 11:20:07.422023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.662 [2024-11-20 11:20:07.550220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.662 [2024-11-20 11:20:07.771026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.662 [2024-11-20 11:20:07.771078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.232 [2024-11-20 11:20:08.110661] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.232 [2024-11-20 11:20:08.110720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.232 [2024-11-20 11:20:08.110734] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.232 [2024-11-20 11:20:08.110744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.232 [2024-11-20 11:20:08.110750] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:25.232 [2024-11-20 11:20:08.110759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.232 "name": "Existed_Raid", 00:10:25.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.232 "strip_size_kb": 64, 00:10:25.232 "state": "configuring", 00:10:25.232 "raid_level": "concat", 00:10:25.232 "superblock": false, 00:10:25.232 "num_base_bdevs": 3, 00:10:25.232 "num_base_bdevs_discovered": 0, 00:10:25.232 "num_base_bdevs_operational": 3, 00:10:25.232 "base_bdevs_list": [ 00:10:25.232 { 00:10:25.232 "name": "BaseBdev1", 00:10:25.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.232 "is_configured": false, 00:10:25.232 "data_offset": 0, 00:10:25.232 "data_size": 0 00:10:25.232 }, 00:10:25.232 { 00:10:25.232 "name": "BaseBdev2", 00:10:25.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.232 "is_configured": false, 00:10:25.232 "data_offset": 0, 00:10:25.232 "data_size": 0 00:10:25.232 }, 00:10:25.232 { 00:10:25.232 "name": "BaseBdev3", 00:10:25.232 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:25.232 "is_configured": false, 00:10:25.232 "data_offset": 0, 00:10:25.232 "data_size": 0 00:10:25.232 } 00:10:25.232 ] 00:10:25.232 }' 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.232 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.493 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:25.493 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.493 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.493 [2024-11-20 11:20:08.601797] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.493 [2024-11-20 11:20:08.601843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:25.493 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.493 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:25.493 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.493 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.753 [2024-11-20 11:20:08.609779] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.753 [2024-11-20 11:20:08.609847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.753 [2024-11-20 11:20:08.609858] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.753 [2024-11-20 11:20:08.609869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:10:25.753 [2024-11-20 11:20:08.609877] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:25.753 [2024-11-20 11:20:08.609888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.753 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.753 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:25.753 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.753 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.753 [2024-11-20 11:20:08.656060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.753 BaseBdev1 00:10:25.753 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.753 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:25.753 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:25.753 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:25.753 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:25.753 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:25.753 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:25.753 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:25.753 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.753 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:25.753 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.753 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:25.753 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.753 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.753 [ 00:10:25.753 { 00:10:25.753 "name": "BaseBdev1", 00:10:25.753 "aliases": [ 00:10:25.753 "7a79e5fb-8a11-4b2f-b172-99ca28d1de45" 00:10:25.753 ], 00:10:25.753 "product_name": "Malloc disk", 00:10:25.753 "block_size": 512, 00:10:25.753 "num_blocks": 65536, 00:10:25.753 "uuid": "7a79e5fb-8a11-4b2f-b172-99ca28d1de45", 00:10:25.753 "assigned_rate_limits": { 00:10:25.753 "rw_ios_per_sec": 0, 00:10:25.753 "rw_mbytes_per_sec": 0, 00:10:25.753 "r_mbytes_per_sec": 0, 00:10:25.753 "w_mbytes_per_sec": 0 00:10:25.753 }, 00:10:25.753 "claimed": true, 00:10:25.753 "claim_type": "exclusive_write", 00:10:25.753 "zoned": false, 00:10:25.753 "supported_io_types": { 00:10:25.753 "read": true, 00:10:25.753 "write": true, 00:10:25.753 "unmap": true, 00:10:25.753 "flush": true, 00:10:25.753 "reset": true, 00:10:25.753 "nvme_admin": false, 00:10:25.753 "nvme_io": false, 00:10:25.753 "nvme_io_md": false, 00:10:25.753 "write_zeroes": true, 00:10:25.753 "zcopy": true, 00:10:25.753 "get_zone_info": false, 00:10:25.753 "zone_management": false, 00:10:25.753 "zone_append": false, 00:10:25.753 "compare": false, 00:10:25.753 "compare_and_write": false, 00:10:25.753 "abort": true, 00:10:25.753 "seek_hole": false, 00:10:25.753 "seek_data": false, 00:10:25.753 "copy": true, 00:10:25.753 "nvme_iov_md": false 00:10:25.753 }, 00:10:25.753 "memory_domains": [ 00:10:25.753 { 00:10:25.753 "dma_device_id": "system", 00:10:25.753 "dma_device_type": 1 00:10:25.753 }, 00:10:25.753 { 00:10:25.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:25.753 "dma_device_type": 2 00:10:25.753 } 00:10:25.753 ], 00:10:25.753 "driver_specific": {} 00:10:25.753 } 00:10:25.753 ] 00:10:25.753 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.753 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:25.753 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:25.754 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.754 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.754 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.754 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.754 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.754 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.754 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.754 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.754 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.754 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.754 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.754 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.754 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.754 11:20:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.754 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.754 "name": "Existed_Raid", 00:10:25.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.754 "strip_size_kb": 64, 00:10:25.754 "state": "configuring", 00:10:25.754 "raid_level": "concat", 00:10:25.754 "superblock": false, 00:10:25.754 "num_base_bdevs": 3, 00:10:25.754 "num_base_bdevs_discovered": 1, 00:10:25.754 "num_base_bdevs_operational": 3, 00:10:25.754 "base_bdevs_list": [ 00:10:25.754 { 00:10:25.754 "name": "BaseBdev1", 00:10:25.754 "uuid": "7a79e5fb-8a11-4b2f-b172-99ca28d1de45", 00:10:25.754 "is_configured": true, 00:10:25.754 "data_offset": 0, 00:10:25.754 "data_size": 65536 00:10:25.754 }, 00:10:25.754 { 00:10:25.754 "name": "BaseBdev2", 00:10:25.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.754 "is_configured": false, 00:10:25.754 "data_offset": 0, 00:10:25.754 "data_size": 0 00:10:25.754 }, 00:10:25.754 { 00:10:25.754 "name": "BaseBdev3", 00:10:25.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.754 "is_configured": false, 00:10:25.754 "data_offset": 0, 00:10:25.754 "data_size": 0 00:10:25.754 } 00:10:25.754 ] 00:10:25.754 }' 00:10:25.754 11:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.754 11:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.322 [2024-11-20 11:20:09.183310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.322 [2024-11-20 11:20:09.183376] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.322 [2024-11-20 11:20:09.195351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.322 [2024-11-20 11:20:09.197161] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.322 [2024-11-20 11:20:09.197204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.322 [2024-11-20 11:20:09.197214] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.322 [2024-11-20 11:20:09.197223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.322 11:20:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.322 "name": "Existed_Raid", 00:10:26.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.322 "strip_size_kb": 64, 00:10:26.322 "state": "configuring", 00:10:26.322 "raid_level": "concat", 00:10:26.322 "superblock": false, 00:10:26.322 "num_base_bdevs": 3, 00:10:26.322 "num_base_bdevs_discovered": 1, 00:10:26.322 "num_base_bdevs_operational": 3, 00:10:26.322 "base_bdevs_list": [ 00:10:26.322 { 00:10:26.322 "name": "BaseBdev1", 00:10:26.322 "uuid": "7a79e5fb-8a11-4b2f-b172-99ca28d1de45", 00:10:26.322 "is_configured": true, 00:10:26.322 "data_offset": 
0, 00:10:26.322 "data_size": 65536 00:10:26.322 }, 00:10:26.322 { 00:10:26.322 "name": "BaseBdev2", 00:10:26.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.322 "is_configured": false, 00:10:26.322 "data_offset": 0, 00:10:26.322 "data_size": 0 00:10:26.322 }, 00:10:26.322 { 00:10:26.322 "name": "BaseBdev3", 00:10:26.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.322 "is_configured": false, 00:10:26.322 "data_offset": 0, 00:10:26.322 "data_size": 0 00:10:26.322 } 00:10:26.322 ] 00:10:26.322 }' 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.322 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.589 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:26.589 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.589 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.862 [2024-11-20 11:20:09.700746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.862 BaseBdev2 00:10:26.862 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.862 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:26.862 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:26.862 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.862 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:26.862 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.862 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:26.862 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.862 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.862 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.862 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.862 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:26.862 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.862 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.862 [ 00:10:26.862 { 00:10:26.862 "name": "BaseBdev2", 00:10:26.862 "aliases": [ 00:10:26.862 "c027c4c2-6a11-4539-b844-c6414ba3005d" 00:10:26.862 ], 00:10:26.862 "product_name": "Malloc disk", 00:10:26.862 "block_size": 512, 00:10:26.862 "num_blocks": 65536, 00:10:26.862 "uuid": "c027c4c2-6a11-4539-b844-c6414ba3005d", 00:10:26.862 "assigned_rate_limits": { 00:10:26.862 "rw_ios_per_sec": 0, 00:10:26.862 "rw_mbytes_per_sec": 0, 00:10:26.862 "r_mbytes_per_sec": 0, 00:10:26.862 "w_mbytes_per_sec": 0 00:10:26.862 }, 00:10:26.862 "claimed": true, 00:10:26.862 "claim_type": "exclusive_write", 00:10:26.862 "zoned": false, 00:10:26.862 "supported_io_types": { 00:10:26.862 "read": true, 00:10:26.862 "write": true, 00:10:26.862 "unmap": true, 00:10:26.862 "flush": true, 00:10:26.862 "reset": true, 00:10:26.862 "nvme_admin": false, 00:10:26.862 "nvme_io": false, 00:10:26.862 "nvme_io_md": false, 00:10:26.862 "write_zeroes": true, 00:10:26.862 "zcopy": true, 00:10:26.862 "get_zone_info": false, 00:10:26.862 "zone_management": false, 00:10:26.862 "zone_append": false, 00:10:26.862 "compare": false, 00:10:26.862 "compare_and_write": false, 00:10:26.862 "abort": true, 00:10:26.862 "seek_hole": 
false, 00:10:26.862 "seek_data": false, 00:10:26.862 "copy": true, 00:10:26.862 "nvme_iov_md": false 00:10:26.862 }, 00:10:26.862 "memory_domains": [ 00:10:26.862 { 00:10:26.862 "dma_device_id": "system", 00:10:26.862 "dma_device_type": 1 00:10:26.862 }, 00:10:26.862 { 00:10:26.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.862 "dma_device_type": 2 00:10:26.863 } 00:10:26.863 ], 00:10:26.863 "driver_specific": {} 00:10:26.863 } 00:10:26.863 ] 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.863 "name": "Existed_Raid", 00:10:26.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.863 "strip_size_kb": 64, 00:10:26.863 "state": "configuring", 00:10:26.863 "raid_level": "concat", 00:10:26.863 "superblock": false, 00:10:26.863 "num_base_bdevs": 3, 00:10:26.863 "num_base_bdevs_discovered": 2, 00:10:26.863 "num_base_bdevs_operational": 3, 00:10:26.863 "base_bdevs_list": [ 00:10:26.863 { 00:10:26.863 "name": "BaseBdev1", 00:10:26.863 "uuid": "7a79e5fb-8a11-4b2f-b172-99ca28d1de45", 00:10:26.863 "is_configured": true, 00:10:26.863 "data_offset": 0, 00:10:26.863 "data_size": 65536 00:10:26.863 }, 00:10:26.863 { 00:10:26.863 "name": "BaseBdev2", 00:10:26.863 "uuid": "c027c4c2-6a11-4539-b844-c6414ba3005d", 00:10:26.863 "is_configured": true, 00:10:26.863 "data_offset": 0, 00:10:26.863 "data_size": 65536 00:10:26.863 }, 00:10:26.863 { 00:10:26.863 "name": "BaseBdev3", 00:10:26.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.863 "is_configured": false, 00:10:26.863 "data_offset": 0, 00:10:26.863 "data_size": 0 00:10:26.863 } 00:10:26.863 ] 00:10:26.863 }' 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.863 11:20:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:27.122 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:27.122 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.122 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.381 [2024-11-20 11:20:10.246420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.381 [2024-11-20 11:20:10.246497] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:27.381 [2024-11-20 11:20:10.246513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:27.381 [2024-11-20 11:20:10.246782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:27.381 [2024-11-20 11:20:10.246954] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:27.381 [2024-11-20 11:20:10.246973] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:27.381 [2024-11-20 11:20:10.247252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.381 BaseBdev3 00:10:27.381 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.381 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:27.381 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:27.381 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.381 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:27.381 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.381 11:20:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.381 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.381 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.381 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.381 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.381 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:27.381 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.381 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.381 [ 00:10:27.381 { 00:10:27.381 "name": "BaseBdev3", 00:10:27.381 "aliases": [ 00:10:27.381 "0354b378-d7b8-4049-b5de-bf1c6d858b4a" 00:10:27.381 ], 00:10:27.381 "product_name": "Malloc disk", 00:10:27.381 "block_size": 512, 00:10:27.381 "num_blocks": 65536, 00:10:27.381 "uuid": "0354b378-d7b8-4049-b5de-bf1c6d858b4a", 00:10:27.381 "assigned_rate_limits": { 00:10:27.381 "rw_ios_per_sec": 0, 00:10:27.381 "rw_mbytes_per_sec": 0, 00:10:27.381 "r_mbytes_per_sec": 0, 00:10:27.381 "w_mbytes_per_sec": 0 00:10:27.381 }, 00:10:27.381 "claimed": true, 00:10:27.381 "claim_type": "exclusive_write", 00:10:27.381 "zoned": false, 00:10:27.381 "supported_io_types": { 00:10:27.381 "read": true, 00:10:27.382 "write": true, 00:10:27.382 "unmap": true, 00:10:27.382 "flush": true, 00:10:27.382 "reset": true, 00:10:27.382 "nvme_admin": false, 00:10:27.382 "nvme_io": false, 00:10:27.382 "nvme_io_md": false, 00:10:27.382 "write_zeroes": true, 00:10:27.382 "zcopy": true, 00:10:27.382 "get_zone_info": false, 00:10:27.382 "zone_management": false, 00:10:27.382 "zone_append": false, 00:10:27.382 "compare": false, 
00:10:27.382 "compare_and_write": false, 00:10:27.382 "abort": true, 00:10:27.382 "seek_hole": false, 00:10:27.382 "seek_data": false, 00:10:27.382 "copy": true, 00:10:27.382 "nvme_iov_md": false 00:10:27.382 }, 00:10:27.382 "memory_domains": [ 00:10:27.382 { 00:10:27.382 "dma_device_id": "system", 00:10:27.382 "dma_device_type": 1 00:10:27.382 }, 00:10:27.382 { 00:10:27.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.382 "dma_device_type": 2 00:10:27.382 } 00:10:27.382 ], 00:10:27.382 "driver_specific": {} 00:10:27.382 } 00:10:27.382 ] 00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.382 "name": "Existed_Raid", 00:10:27.382 "uuid": "6f2d1e72-59e7-4f72-8082-be7be85c0823", 00:10:27.382 "strip_size_kb": 64, 00:10:27.382 "state": "online", 00:10:27.382 "raid_level": "concat", 00:10:27.382 "superblock": false, 00:10:27.382 "num_base_bdevs": 3, 00:10:27.382 "num_base_bdevs_discovered": 3, 00:10:27.382 "num_base_bdevs_operational": 3, 00:10:27.382 "base_bdevs_list": [ 00:10:27.382 { 00:10:27.382 "name": "BaseBdev1", 00:10:27.382 "uuid": "7a79e5fb-8a11-4b2f-b172-99ca28d1de45", 00:10:27.382 "is_configured": true, 00:10:27.382 "data_offset": 0, 00:10:27.382 "data_size": 65536 00:10:27.382 }, 00:10:27.382 { 00:10:27.382 "name": "BaseBdev2", 00:10:27.382 "uuid": "c027c4c2-6a11-4539-b844-c6414ba3005d", 00:10:27.382 "is_configured": true, 00:10:27.382 "data_offset": 0, 00:10:27.382 "data_size": 65536 00:10:27.382 }, 00:10:27.382 { 00:10:27.382 "name": "BaseBdev3", 00:10:27.382 "uuid": "0354b378-d7b8-4049-b5de-bf1c6d858b4a", 00:10:27.382 "is_configured": true, 00:10:27.382 "data_offset": 0, 00:10:27.382 "data_size": 65536 00:10:27.382 } 00:10:27.382 ] 00:10:27.382 }' 00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:27.382 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.950 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:27.950 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:27.950 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:27.950 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:27.950 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:27.950 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:27.950 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:27.950 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:27.950 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.950 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.950 [2024-11-20 11:20:10.785934] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.950 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.950 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:27.951 "name": "Existed_Raid", 00:10:27.951 "aliases": [ 00:10:27.951 "6f2d1e72-59e7-4f72-8082-be7be85c0823" 00:10:27.951 ], 00:10:27.951 "product_name": "Raid Volume", 00:10:27.951 "block_size": 512, 00:10:27.951 "num_blocks": 196608, 00:10:27.951 "uuid": "6f2d1e72-59e7-4f72-8082-be7be85c0823", 00:10:27.951 "assigned_rate_limits": { 00:10:27.951 "rw_ios_per_sec": 0, 00:10:27.951 "rw_mbytes_per_sec": 0, 00:10:27.951 "r_mbytes_per_sec": 
0, 00:10:27.951 "w_mbytes_per_sec": 0 00:10:27.951 }, 00:10:27.951 "claimed": false, 00:10:27.951 "zoned": false, 00:10:27.951 "supported_io_types": { 00:10:27.951 "read": true, 00:10:27.951 "write": true, 00:10:27.951 "unmap": true, 00:10:27.951 "flush": true, 00:10:27.951 "reset": true, 00:10:27.951 "nvme_admin": false, 00:10:27.951 "nvme_io": false, 00:10:27.951 "nvme_io_md": false, 00:10:27.951 "write_zeroes": true, 00:10:27.951 "zcopy": false, 00:10:27.951 "get_zone_info": false, 00:10:27.951 "zone_management": false, 00:10:27.951 "zone_append": false, 00:10:27.951 "compare": false, 00:10:27.951 "compare_and_write": false, 00:10:27.951 "abort": false, 00:10:27.951 "seek_hole": false, 00:10:27.951 "seek_data": false, 00:10:27.951 "copy": false, 00:10:27.951 "nvme_iov_md": false 00:10:27.951 }, 00:10:27.951 "memory_domains": [ 00:10:27.951 { 00:10:27.951 "dma_device_id": "system", 00:10:27.951 "dma_device_type": 1 00:10:27.951 }, 00:10:27.951 { 00:10:27.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.951 "dma_device_type": 2 00:10:27.951 }, 00:10:27.951 { 00:10:27.951 "dma_device_id": "system", 00:10:27.951 "dma_device_type": 1 00:10:27.951 }, 00:10:27.951 { 00:10:27.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.951 "dma_device_type": 2 00:10:27.951 }, 00:10:27.951 { 00:10:27.951 "dma_device_id": "system", 00:10:27.951 "dma_device_type": 1 00:10:27.951 }, 00:10:27.951 { 00:10:27.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.951 "dma_device_type": 2 00:10:27.951 } 00:10:27.951 ], 00:10:27.951 "driver_specific": { 00:10:27.951 "raid": { 00:10:27.951 "uuid": "6f2d1e72-59e7-4f72-8082-be7be85c0823", 00:10:27.951 "strip_size_kb": 64, 00:10:27.951 "state": "online", 00:10:27.951 "raid_level": "concat", 00:10:27.951 "superblock": false, 00:10:27.951 "num_base_bdevs": 3, 00:10:27.951 "num_base_bdevs_discovered": 3, 00:10:27.951 "num_base_bdevs_operational": 3, 00:10:27.951 "base_bdevs_list": [ 00:10:27.951 { 00:10:27.951 "name": "BaseBdev1", 
00:10:27.951 "uuid": "7a79e5fb-8a11-4b2f-b172-99ca28d1de45", 00:10:27.951 "is_configured": true, 00:10:27.951 "data_offset": 0, 00:10:27.951 "data_size": 65536 00:10:27.951 }, 00:10:27.951 { 00:10:27.951 "name": "BaseBdev2", 00:10:27.951 "uuid": "c027c4c2-6a11-4539-b844-c6414ba3005d", 00:10:27.951 "is_configured": true, 00:10:27.951 "data_offset": 0, 00:10:27.951 "data_size": 65536 00:10:27.951 }, 00:10:27.951 { 00:10:27.951 "name": "BaseBdev3", 00:10:27.951 "uuid": "0354b378-d7b8-4049-b5de-bf1c6d858b4a", 00:10:27.951 "is_configured": true, 00:10:27.951 "data_offset": 0, 00:10:27.951 "data_size": 65536 00:10:27.951 } 00:10:27.951 ] 00:10:27.951 } 00:10:27.951 } 00:10:27.951 }' 00:10:27.951 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:27.951 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:27.951 BaseBdev2 00:10:27.951 BaseBdev3' 00:10:27.951 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.951 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:27.951 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.951 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:27.951 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.951 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.951 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.951 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:27.951 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.951 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.951 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.951 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:27.951 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.951 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.951 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.951 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.951 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.951 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.951 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.951 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:27.951 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.951 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.951 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.951 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.211 [2024-11-20 11:20:11.085142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:28.211 [2024-11-20 11:20:11.085245] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.211 [2024-11-20 11:20:11.085353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.211 "name": "Existed_Raid", 00:10:28.211 "uuid": "6f2d1e72-59e7-4f72-8082-be7be85c0823", 00:10:28.211 "strip_size_kb": 64, 00:10:28.211 "state": "offline", 00:10:28.211 "raid_level": "concat", 00:10:28.211 "superblock": false, 00:10:28.211 "num_base_bdevs": 3, 00:10:28.211 "num_base_bdevs_discovered": 2, 00:10:28.211 "num_base_bdevs_operational": 2, 00:10:28.211 "base_bdevs_list": [ 00:10:28.211 { 00:10:28.211 "name": null, 00:10:28.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.211 "is_configured": false, 00:10:28.211 "data_offset": 0, 00:10:28.211 "data_size": 65536 00:10:28.211 }, 00:10:28.211 { 00:10:28.211 "name": "BaseBdev2", 00:10:28.211 "uuid": 
"c027c4c2-6a11-4539-b844-c6414ba3005d", 00:10:28.211 "is_configured": true, 00:10:28.211 "data_offset": 0, 00:10:28.211 "data_size": 65536 00:10:28.211 }, 00:10:28.211 { 00:10:28.211 "name": "BaseBdev3", 00:10:28.211 "uuid": "0354b378-d7b8-4049-b5de-bf1c6d858b4a", 00:10:28.211 "is_configured": true, 00:10:28.211 "data_offset": 0, 00:10:28.211 "data_size": 65536 00:10:28.211 } 00:10:28.211 ] 00:10:28.211 }' 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.211 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.782 [2024-11-20 11:20:11.722534] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.782 11:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.782 [2024-11-20 11:20:11.895366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:28.782 [2024-11-20 11:20:11.895494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.042 11:20:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.042 BaseBdev2 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.042 
11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.042 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.042 [ 00:10:29.042 { 00:10:29.042 "name": "BaseBdev2", 00:10:29.042 "aliases": [ 00:10:29.042 "21b96ef1-d4b1-496a-a954-6fa894e4ac57" 00:10:29.042 ], 00:10:29.042 "product_name": "Malloc disk", 00:10:29.042 "block_size": 512, 00:10:29.043 "num_blocks": 65536, 00:10:29.043 "uuid": "21b96ef1-d4b1-496a-a954-6fa894e4ac57", 00:10:29.043 "assigned_rate_limits": { 00:10:29.043 "rw_ios_per_sec": 0, 00:10:29.043 "rw_mbytes_per_sec": 0, 00:10:29.043 "r_mbytes_per_sec": 0, 00:10:29.043 "w_mbytes_per_sec": 0 00:10:29.043 }, 00:10:29.043 "claimed": false, 00:10:29.043 "zoned": false, 00:10:29.043 "supported_io_types": { 00:10:29.043 "read": true, 00:10:29.043 "write": true, 00:10:29.043 "unmap": true, 00:10:29.043 "flush": true, 00:10:29.043 "reset": true, 00:10:29.043 "nvme_admin": false, 00:10:29.043 "nvme_io": false, 00:10:29.043 "nvme_io_md": false, 00:10:29.043 "write_zeroes": true, 
00:10:29.043 "zcopy": true, 00:10:29.043 "get_zone_info": false, 00:10:29.043 "zone_management": false, 00:10:29.043 "zone_append": false, 00:10:29.043 "compare": false, 00:10:29.043 "compare_and_write": false, 00:10:29.043 "abort": true, 00:10:29.043 "seek_hole": false, 00:10:29.043 "seek_data": false, 00:10:29.043 "copy": true, 00:10:29.043 "nvme_iov_md": false 00:10:29.043 }, 00:10:29.043 "memory_domains": [ 00:10:29.043 { 00:10:29.043 "dma_device_id": "system", 00:10:29.043 "dma_device_type": 1 00:10:29.043 }, 00:10:29.043 { 00:10:29.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.043 "dma_device_type": 2 00:10:29.043 } 00:10:29.043 ], 00:10:29.043 "driver_specific": {} 00:10:29.043 } 00:10:29.043 ] 00:10:29.043 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.043 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:29.043 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:29.043 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.043 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:29.043 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.043 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.303 BaseBdev3 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.303 11:20:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.303 [ 00:10:29.303 { 00:10:29.303 "name": "BaseBdev3", 00:10:29.303 "aliases": [ 00:10:29.303 "4b34b928-80cf-4d03-bb64-c4b804166ab6" 00:10:29.303 ], 00:10:29.303 "product_name": "Malloc disk", 00:10:29.303 "block_size": 512, 00:10:29.303 "num_blocks": 65536, 00:10:29.303 "uuid": "4b34b928-80cf-4d03-bb64-c4b804166ab6", 00:10:29.303 "assigned_rate_limits": { 00:10:29.303 "rw_ios_per_sec": 0, 00:10:29.303 "rw_mbytes_per_sec": 0, 00:10:29.303 "r_mbytes_per_sec": 0, 00:10:29.303 "w_mbytes_per_sec": 0 00:10:29.303 }, 00:10:29.303 "claimed": false, 00:10:29.303 "zoned": false, 00:10:29.303 "supported_io_types": { 00:10:29.303 "read": true, 00:10:29.303 "write": true, 00:10:29.303 "unmap": true, 00:10:29.303 "flush": true, 00:10:29.303 "reset": true, 00:10:29.303 "nvme_admin": false, 00:10:29.303 "nvme_io": false, 00:10:29.303 "nvme_io_md": false, 00:10:29.303 "write_zeroes": true, 
00:10:29.303 "zcopy": true, 00:10:29.303 "get_zone_info": false, 00:10:29.303 "zone_management": false, 00:10:29.303 "zone_append": false, 00:10:29.303 "compare": false, 00:10:29.303 "compare_and_write": false, 00:10:29.303 "abort": true, 00:10:29.303 "seek_hole": false, 00:10:29.303 "seek_data": false, 00:10:29.303 "copy": true, 00:10:29.303 "nvme_iov_md": false 00:10:29.303 }, 00:10:29.303 "memory_domains": [ 00:10:29.303 { 00:10:29.303 "dma_device_id": "system", 00:10:29.303 "dma_device_type": 1 00:10:29.303 }, 00:10:29.303 { 00:10:29.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.303 "dma_device_type": 2 00:10:29.303 } 00:10:29.303 ], 00:10:29.303 "driver_specific": {} 00:10:29.303 } 00:10:29.303 ] 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.303 [2024-11-20 11:20:12.233794] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:29.303 [2024-11-20 11:20:12.233927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:29.303 [2024-11-20 11:20:12.233982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.303 [2024-11-20 11:20:12.235975] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.303 11:20:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.303 "name": "Existed_Raid", 00:10:29.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.303 "strip_size_kb": 64, 00:10:29.303 "state": "configuring", 00:10:29.303 "raid_level": "concat", 00:10:29.303 "superblock": false, 00:10:29.303 "num_base_bdevs": 3, 00:10:29.303 "num_base_bdevs_discovered": 2, 00:10:29.303 "num_base_bdevs_operational": 3, 00:10:29.303 "base_bdevs_list": [ 00:10:29.303 { 00:10:29.303 "name": "BaseBdev1", 00:10:29.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.303 "is_configured": false, 00:10:29.303 "data_offset": 0, 00:10:29.303 "data_size": 0 00:10:29.303 }, 00:10:29.303 { 00:10:29.303 "name": "BaseBdev2", 00:10:29.303 "uuid": "21b96ef1-d4b1-496a-a954-6fa894e4ac57", 00:10:29.303 "is_configured": true, 00:10:29.303 "data_offset": 0, 00:10:29.303 "data_size": 65536 00:10:29.303 }, 00:10:29.303 { 00:10:29.303 "name": "BaseBdev3", 00:10:29.304 "uuid": "4b34b928-80cf-4d03-bb64-c4b804166ab6", 00:10:29.304 "is_configured": true, 00:10:29.304 "data_offset": 0, 00:10:29.304 "data_size": 65536 00:10:29.304 } 00:10:29.304 ] 00:10:29.304 }' 00:10:29.304 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.304 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.876 [2024-11-20 11:20:12.772922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.876 "name": "Existed_Raid", 00:10:29.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.876 "strip_size_kb": 64, 00:10:29.876 "state": "configuring", 00:10:29.876 "raid_level": "concat", 00:10:29.876 "superblock": false, 
00:10:29.876 "num_base_bdevs": 3, 00:10:29.876 "num_base_bdevs_discovered": 1, 00:10:29.876 "num_base_bdevs_operational": 3, 00:10:29.876 "base_bdevs_list": [ 00:10:29.876 { 00:10:29.876 "name": "BaseBdev1", 00:10:29.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.876 "is_configured": false, 00:10:29.876 "data_offset": 0, 00:10:29.876 "data_size": 0 00:10:29.876 }, 00:10:29.876 { 00:10:29.876 "name": null, 00:10:29.876 "uuid": "21b96ef1-d4b1-496a-a954-6fa894e4ac57", 00:10:29.876 "is_configured": false, 00:10:29.876 "data_offset": 0, 00:10:29.876 "data_size": 65536 00:10:29.876 }, 00:10:29.876 { 00:10:29.876 "name": "BaseBdev3", 00:10:29.876 "uuid": "4b34b928-80cf-4d03-bb64-c4b804166ab6", 00:10:29.876 "is_configured": true, 00:10:29.876 "data_offset": 0, 00:10:29.876 "data_size": 65536 00:10:29.876 } 00:10:29.876 ] 00:10:29.876 }' 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.876 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.136 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.136 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.136 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.136 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.419 
11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.419 [2024-11-20 11:20:13.326339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.419 BaseBdev1 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.419 [ 00:10:30.419 { 00:10:30.419 "name": "BaseBdev1", 00:10:30.419 "aliases": [ 00:10:30.419 "bf388568-5892-4591-8eaa-400f20314971" 00:10:30.419 ], 00:10:30.419 "product_name": 
"Malloc disk", 00:10:30.419 "block_size": 512, 00:10:30.419 "num_blocks": 65536, 00:10:30.419 "uuid": "bf388568-5892-4591-8eaa-400f20314971", 00:10:30.419 "assigned_rate_limits": { 00:10:30.419 "rw_ios_per_sec": 0, 00:10:30.419 "rw_mbytes_per_sec": 0, 00:10:30.419 "r_mbytes_per_sec": 0, 00:10:30.419 "w_mbytes_per_sec": 0 00:10:30.419 }, 00:10:30.419 "claimed": true, 00:10:30.419 "claim_type": "exclusive_write", 00:10:30.419 "zoned": false, 00:10:30.419 "supported_io_types": { 00:10:30.419 "read": true, 00:10:30.419 "write": true, 00:10:30.419 "unmap": true, 00:10:30.419 "flush": true, 00:10:30.419 "reset": true, 00:10:30.419 "nvme_admin": false, 00:10:30.419 "nvme_io": false, 00:10:30.419 "nvme_io_md": false, 00:10:30.419 "write_zeroes": true, 00:10:30.419 "zcopy": true, 00:10:30.419 "get_zone_info": false, 00:10:30.419 "zone_management": false, 00:10:30.419 "zone_append": false, 00:10:30.419 "compare": false, 00:10:30.419 "compare_and_write": false, 00:10:30.419 "abort": true, 00:10:30.419 "seek_hole": false, 00:10:30.419 "seek_data": false, 00:10:30.419 "copy": true, 00:10:30.419 "nvme_iov_md": false 00:10:30.419 }, 00:10:30.419 "memory_domains": [ 00:10:30.419 { 00:10:30.419 "dma_device_id": "system", 00:10:30.419 "dma_device_type": 1 00:10:30.419 }, 00:10:30.419 { 00:10:30.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.419 "dma_device_type": 2 00:10:30.419 } 00:10:30.419 ], 00:10:30.419 "driver_specific": {} 00:10:30.419 } 00:10:30.419 ] 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.419 11:20:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.419 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.420 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.420 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.420 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.420 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.420 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.420 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.420 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.420 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.420 "name": "Existed_Raid", 00:10:30.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.420 "strip_size_kb": 64, 00:10:30.420 "state": "configuring", 00:10:30.420 "raid_level": "concat", 00:10:30.420 "superblock": false, 00:10:30.420 "num_base_bdevs": 3, 00:10:30.420 "num_base_bdevs_discovered": 2, 00:10:30.420 "num_base_bdevs_operational": 3, 00:10:30.420 "base_bdevs_list": [ 00:10:30.420 { 00:10:30.420 "name": "BaseBdev1", 
00:10:30.420 "uuid": "bf388568-5892-4591-8eaa-400f20314971", 00:10:30.420 "is_configured": true, 00:10:30.420 "data_offset": 0, 00:10:30.420 "data_size": 65536 00:10:30.420 }, 00:10:30.420 { 00:10:30.420 "name": null, 00:10:30.420 "uuid": "21b96ef1-d4b1-496a-a954-6fa894e4ac57", 00:10:30.420 "is_configured": false, 00:10:30.420 "data_offset": 0, 00:10:30.420 "data_size": 65536 00:10:30.420 }, 00:10:30.420 { 00:10:30.420 "name": "BaseBdev3", 00:10:30.420 "uuid": "4b34b928-80cf-4d03-bb64-c4b804166ab6", 00:10:30.420 "is_configured": true, 00:10:30.420 "data_offset": 0, 00:10:30.420 "data_size": 65536 00:10:30.420 } 00:10:30.420 ] 00:10:30.420 }' 00:10:30.420 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.420 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.988 [2024-11-20 11:20:13.869511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:30.988 
11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.988 "name": "Existed_Raid", 00:10:30.988 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:30.988 "strip_size_kb": 64, 00:10:30.988 "state": "configuring", 00:10:30.988 "raid_level": "concat", 00:10:30.988 "superblock": false, 00:10:30.988 "num_base_bdevs": 3, 00:10:30.988 "num_base_bdevs_discovered": 1, 00:10:30.988 "num_base_bdevs_operational": 3, 00:10:30.988 "base_bdevs_list": [ 00:10:30.988 { 00:10:30.988 "name": "BaseBdev1", 00:10:30.988 "uuid": "bf388568-5892-4591-8eaa-400f20314971", 00:10:30.988 "is_configured": true, 00:10:30.988 "data_offset": 0, 00:10:30.988 "data_size": 65536 00:10:30.988 }, 00:10:30.988 { 00:10:30.988 "name": null, 00:10:30.988 "uuid": "21b96ef1-d4b1-496a-a954-6fa894e4ac57", 00:10:30.988 "is_configured": false, 00:10:30.988 "data_offset": 0, 00:10:30.988 "data_size": 65536 00:10:30.988 }, 00:10:30.988 { 00:10:30.988 "name": null, 00:10:30.988 "uuid": "4b34b928-80cf-4d03-bb64-c4b804166ab6", 00:10:30.988 "is_configured": false, 00:10:30.988 "data_offset": 0, 00:10:30.988 "data_size": 65536 00:10:30.988 } 00:10:30.988 ] 00:10:30.988 }' 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.988 11:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.247 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.247 11:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.247 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:31.247 11:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.247 11:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.507 [2024-11-20 11:20:14.376653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.507 "name": "Existed_Raid", 00:10:31.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.507 "strip_size_kb": 64, 00:10:31.507 "state": "configuring", 00:10:31.507 "raid_level": "concat", 00:10:31.507 "superblock": false, 00:10:31.507 "num_base_bdevs": 3, 00:10:31.507 "num_base_bdevs_discovered": 2, 00:10:31.507 "num_base_bdevs_operational": 3, 00:10:31.507 "base_bdevs_list": [ 00:10:31.507 { 00:10:31.507 "name": "BaseBdev1", 00:10:31.507 "uuid": "bf388568-5892-4591-8eaa-400f20314971", 00:10:31.507 "is_configured": true, 00:10:31.507 "data_offset": 0, 00:10:31.507 "data_size": 65536 00:10:31.507 }, 00:10:31.507 { 00:10:31.507 "name": null, 00:10:31.507 "uuid": "21b96ef1-d4b1-496a-a954-6fa894e4ac57", 00:10:31.507 "is_configured": false, 00:10:31.507 "data_offset": 0, 00:10:31.507 "data_size": 65536 00:10:31.507 }, 00:10:31.507 { 00:10:31.507 "name": "BaseBdev3", 00:10:31.507 "uuid": "4b34b928-80cf-4d03-bb64-c4b804166ab6", 00:10:31.507 "is_configured": true, 00:10:31.507 "data_offset": 0, 00:10:31.507 "data_size": 65536 00:10:31.507 } 00:10:31.507 ] 00:10:31.507 }' 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.507 11:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.766 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.766 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:31.766 11:20:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.766 11:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.766 11:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.766 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:31.766 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:31.766 11:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.766 11:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.026 [2024-11-20 11:20:14.883794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.026 11:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.026 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:32.026 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.026 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.026 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.026 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.026 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.026 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.026 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.026 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.026 
11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.026 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.026 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.026 11:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.026 11:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.026 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.026 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.026 "name": "Existed_Raid", 00:10:32.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.026 "strip_size_kb": 64, 00:10:32.026 "state": "configuring", 00:10:32.026 "raid_level": "concat", 00:10:32.026 "superblock": false, 00:10:32.026 "num_base_bdevs": 3, 00:10:32.026 "num_base_bdevs_discovered": 1, 00:10:32.026 "num_base_bdevs_operational": 3, 00:10:32.026 "base_bdevs_list": [ 00:10:32.026 { 00:10:32.026 "name": null, 00:10:32.026 "uuid": "bf388568-5892-4591-8eaa-400f20314971", 00:10:32.026 "is_configured": false, 00:10:32.026 "data_offset": 0, 00:10:32.026 "data_size": 65536 00:10:32.026 }, 00:10:32.026 { 00:10:32.026 "name": null, 00:10:32.026 "uuid": "21b96ef1-d4b1-496a-a954-6fa894e4ac57", 00:10:32.026 "is_configured": false, 00:10:32.026 "data_offset": 0, 00:10:32.026 "data_size": 65536 00:10:32.026 }, 00:10:32.026 { 00:10:32.026 "name": "BaseBdev3", 00:10:32.026 "uuid": "4b34b928-80cf-4d03-bb64-c4b804166ab6", 00:10:32.026 "is_configured": true, 00:10:32.026 "data_offset": 0, 00:10:32.026 "data_size": 65536 00:10:32.026 } 00:10:32.026 ] 00:10:32.026 }' 00:10:32.026 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.026 11:20:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.596 [2024-11-20 11:20:15.519798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.596 11:20:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.596 "name": "Existed_Raid", 00:10:32.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.596 "strip_size_kb": 64, 00:10:32.596 "state": "configuring", 00:10:32.596 "raid_level": "concat", 00:10:32.596 "superblock": false, 00:10:32.596 "num_base_bdevs": 3, 00:10:32.596 "num_base_bdevs_discovered": 2, 00:10:32.596 "num_base_bdevs_operational": 3, 00:10:32.596 "base_bdevs_list": [ 00:10:32.596 { 00:10:32.596 "name": null, 00:10:32.596 "uuid": "bf388568-5892-4591-8eaa-400f20314971", 00:10:32.596 "is_configured": false, 00:10:32.596 "data_offset": 0, 00:10:32.596 "data_size": 65536 00:10:32.596 }, 00:10:32.596 { 00:10:32.596 "name": "BaseBdev2", 00:10:32.596 "uuid": "21b96ef1-d4b1-496a-a954-6fa894e4ac57", 00:10:32.596 "is_configured": true, 00:10:32.596 "data_offset": 
0, 00:10:32.596 "data_size": 65536 00:10:32.596 }, 00:10:32.596 { 00:10:32.596 "name": "BaseBdev3", 00:10:32.596 "uuid": "4b34b928-80cf-4d03-bb64-c4b804166ab6", 00:10:32.596 "is_configured": true, 00:10:32.596 "data_offset": 0, 00:10:32.596 "data_size": 65536 00:10:32.596 } 00:10:32.596 ] 00:10:32.596 }' 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.596 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.855 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:32.855 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.855 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.855 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.115 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bf388568-5892-4591-8eaa-400f20314971 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.115 [2024-11-20 11:20:16.085356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:33.115 [2024-11-20 11:20:16.085409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:33.115 [2024-11-20 11:20:16.085418] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:33.115 [2024-11-20 11:20:16.085692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:33.115 [2024-11-20 11:20:16.085847] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:33.115 [2024-11-20 11:20:16.085856] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:33.115 [2024-11-20 11:20:16.086149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.115 NewBaseBdev 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.115 
11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.115 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.115 [ 00:10:33.115 { 00:10:33.115 "name": "NewBaseBdev", 00:10:33.115 "aliases": [ 00:10:33.115 "bf388568-5892-4591-8eaa-400f20314971" 00:10:33.115 ], 00:10:33.115 "product_name": "Malloc disk", 00:10:33.115 "block_size": 512, 00:10:33.115 "num_blocks": 65536, 00:10:33.115 "uuid": "bf388568-5892-4591-8eaa-400f20314971", 00:10:33.115 "assigned_rate_limits": { 00:10:33.115 "rw_ios_per_sec": 0, 00:10:33.115 "rw_mbytes_per_sec": 0, 00:10:33.115 "r_mbytes_per_sec": 0, 00:10:33.115 "w_mbytes_per_sec": 0 00:10:33.115 }, 00:10:33.115 "claimed": true, 00:10:33.115 "claim_type": "exclusive_write", 00:10:33.115 "zoned": false, 00:10:33.115 "supported_io_types": { 00:10:33.115 "read": true, 00:10:33.115 "write": true, 00:10:33.115 "unmap": true, 00:10:33.115 "flush": true, 00:10:33.115 "reset": true, 00:10:33.115 "nvme_admin": false, 00:10:33.115 "nvme_io": false, 00:10:33.115 "nvme_io_md": false, 00:10:33.115 "write_zeroes": true, 00:10:33.115 "zcopy": true, 00:10:33.115 "get_zone_info": false, 00:10:33.115 "zone_management": false, 00:10:33.115 "zone_append": false, 00:10:33.115 "compare": false, 00:10:33.116 "compare_and_write": false, 00:10:33.116 "abort": true, 00:10:33.116 "seek_hole": false, 00:10:33.116 "seek_data": false, 00:10:33.116 "copy": true, 00:10:33.116 "nvme_iov_md": false 00:10:33.116 }, 00:10:33.116 
"memory_domains": [ 00:10:33.116 { 00:10:33.116 "dma_device_id": "system", 00:10:33.116 "dma_device_type": 1 00:10:33.116 }, 00:10:33.116 { 00:10:33.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.116 "dma_device_type": 2 00:10:33.116 } 00:10:33.116 ], 00:10:33.116 "driver_specific": {} 00:10:33.116 } 00:10:33.116 ] 00:10:33.116 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.116 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:33.116 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:33.116 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.116 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.116 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.116 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.116 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.116 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.116 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.116 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.116 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.116 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.116 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.116 11:20:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.116 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.116 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.116 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.116 "name": "Existed_Raid", 00:10:33.116 "uuid": "6c1de48f-bedc-4fa2-8136-6626b0b096e8", 00:10:33.116 "strip_size_kb": 64, 00:10:33.116 "state": "online", 00:10:33.116 "raid_level": "concat", 00:10:33.116 "superblock": false, 00:10:33.116 "num_base_bdevs": 3, 00:10:33.116 "num_base_bdevs_discovered": 3, 00:10:33.116 "num_base_bdevs_operational": 3, 00:10:33.116 "base_bdevs_list": [ 00:10:33.116 { 00:10:33.116 "name": "NewBaseBdev", 00:10:33.116 "uuid": "bf388568-5892-4591-8eaa-400f20314971", 00:10:33.116 "is_configured": true, 00:10:33.116 "data_offset": 0, 00:10:33.116 "data_size": 65536 00:10:33.116 }, 00:10:33.116 { 00:10:33.116 "name": "BaseBdev2", 00:10:33.116 "uuid": "21b96ef1-d4b1-496a-a954-6fa894e4ac57", 00:10:33.116 "is_configured": true, 00:10:33.116 "data_offset": 0, 00:10:33.116 "data_size": 65536 00:10:33.116 }, 00:10:33.116 { 00:10:33.116 "name": "BaseBdev3", 00:10:33.116 "uuid": "4b34b928-80cf-4d03-bb64-c4b804166ab6", 00:10:33.116 "is_configured": true, 00:10:33.116 "data_offset": 0, 00:10:33.116 "data_size": 65536 00:10:33.116 } 00:10:33.116 ] 00:10:33.116 }' 00:10:33.116 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.116 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.684 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:33.684 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:33.684 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:10:33.684 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:33.684 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:33.684 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:33.684 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:33.684 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:33.684 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.684 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.684 [2024-11-20 11:20:16.616826] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:33.684 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.684 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:33.684 "name": "Existed_Raid", 00:10:33.684 "aliases": [ 00:10:33.684 "6c1de48f-bedc-4fa2-8136-6626b0b096e8" 00:10:33.684 ], 00:10:33.684 "product_name": "Raid Volume", 00:10:33.684 "block_size": 512, 00:10:33.684 "num_blocks": 196608, 00:10:33.684 "uuid": "6c1de48f-bedc-4fa2-8136-6626b0b096e8", 00:10:33.684 "assigned_rate_limits": { 00:10:33.684 "rw_ios_per_sec": 0, 00:10:33.684 "rw_mbytes_per_sec": 0, 00:10:33.684 "r_mbytes_per_sec": 0, 00:10:33.684 "w_mbytes_per_sec": 0 00:10:33.684 }, 00:10:33.684 "claimed": false, 00:10:33.684 "zoned": false, 00:10:33.684 "supported_io_types": { 00:10:33.684 "read": true, 00:10:33.684 "write": true, 00:10:33.684 "unmap": true, 00:10:33.684 "flush": true, 00:10:33.684 "reset": true, 00:10:33.684 "nvme_admin": false, 00:10:33.684 "nvme_io": false, 00:10:33.684 "nvme_io_md": false, 00:10:33.684 "write_zeroes": true, 
00:10:33.684 "zcopy": false, 00:10:33.684 "get_zone_info": false, 00:10:33.684 "zone_management": false, 00:10:33.684 "zone_append": false, 00:10:33.684 "compare": false, 00:10:33.684 "compare_and_write": false, 00:10:33.684 "abort": false, 00:10:33.684 "seek_hole": false, 00:10:33.684 "seek_data": false, 00:10:33.684 "copy": false, 00:10:33.684 "nvme_iov_md": false 00:10:33.684 }, 00:10:33.684 "memory_domains": [ 00:10:33.684 { 00:10:33.684 "dma_device_id": "system", 00:10:33.684 "dma_device_type": 1 00:10:33.684 }, 00:10:33.684 { 00:10:33.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.684 "dma_device_type": 2 00:10:33.684 }, 00:10:33.684 { 00:10:33.684 "dma_device_id": "system", 00:10:33.684 "dma_device_type": 1 00:10:33.684 }, 00:10:33.684 { 00:10:33.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.684 "dma_device_type": 2 00:10:33.684 }, 00:10:33.684 { 00:10:33.685 "dma_device_id": "system", 00:10:33.685 "dma_device_type": 1 00:10:33.685 }, 00:10:33.685 { 00:10:33.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.685 "dma_device_type": 2 00:10:33.685 } 00:10:33.685 ], 00:10:33.685 "driver_specific": { 00:10:33.685 "raid": { 00:10:33.685 "uuid": "6c1de48f-bedc-4fa2-8136-6626b0b096e8", 00:10:33.685 "strip_size_kb": 64, 00:10:33.685 "state": "online", 00:10:33.685 "raid_level": "concat", 00:10:33.685 "superblock": false, 00:10:33.685 "num_base_bdevs": 3, 00:10:33.685 "num_base_bdevs_discovered": 3, 00:10:33.685 "num_base_bdevs_operational": 3, 00:10:33.685 "base_bdevs_list": [ 00:10:33.685 { 00:10:33.685 "name": "NewBaseBdev", 00:10:33.685 "uuid": "bf388568-5892-4591-8eaa-400f20314971", 00:10:33.685 "is_configured": true, 00:10:33.685 "data_offset": 0, 00:10:33.685 "data_size": 65536 00:10:33.685 }, 00:10:33.685 { 00:10:33.685 "name": "BaseBdev2", 00:10:33.685 "uuid": "21b96ef1-d4b1-496a-a954-6fa894e4ac57", 00:10:33.685 "is_configured": true, 00:10:33.685 "data_offset": 0, 00:10:33.685 "data_size": 65536 00:10:33.685 }, 00:10:33.685 { 
00:10:33.685 "name": "BaseBdev3", 00:10:33.685 "uuid": "4b34b928-80cf-4d03-bb64-c4b804166ab6", 00:10:33.685 "is_configured": true, 00:10:33.685 "data_offset": 0, 00:10:33.685 "data_size": 65536 00:10:33.685 } 00:10:33.685 ] 00:10:33.685 } 00:10:33.685 } 00:10:33.685 }' 00:10:33.685 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:33.685 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:33.685 BaseBdev2 00:10:33.685 BaseBdev3' 00:10:33.685 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.685 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:33.685 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.685 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:33.685 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.685 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.685 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.685 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.685 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.685 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.685 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.685 11:20:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:33.685 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.685 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.685 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.944 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.944 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.944 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.944 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.944 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.944 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:33.944 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.944 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.944 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.944 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.944 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.944 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:33.944 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.944 11:20:16 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:33.944 [2024-11-20 11:20:16.900046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:33.944 [2024-11-20 11:20:16.900135] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.944 [2024-11-20 11:20:16.900254] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.944 [2024-11-20 11:20:16.900337] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.944 [2024-11-20 11:20:16.900386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:33.944 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.944 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65734 00:10:33.944 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65734 ']' 00:10:33.944 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65734 00:10:33.945 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:33.945 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.945 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65734 00:10:33.945 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.945 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.945 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65734' 00:10:33.945 killing process with pid 65734 00:10:33.945 11:20:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65734 00:10:33.945 [2024-11-20 11:20:16.943389] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.945 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65734 00:10:34.204 [2024-11-20 11:20:17.253303] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:35.595 00:10:35.595 real 0m11.266s 00:10:35.595 user 0m17.997s 00:10:35.595 sys 0m2.008s 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.595 ************************************ 00:10:35.595 END TEST raid_state_function_test 00:10:35.595 ************************************ 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.595 11:20:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:10:35.595 11:20:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:35.595 11:20:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.595 11:20:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:35.595 ************************************ 00:10:35.595 START TEST raid_state_function_test_sb 00:10:35.595 ************************************ 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66361 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66361' 00:10:35.595 Process raid pid: 66361 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66361 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66361 ']' 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:35.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:35.595 11:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.595 [2024-11-20 11:20:18.594454] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:10:35.595 [2024-11-20 11:20:18.594625] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.854 [2024-11-20 11:20:18.765712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.854 [2024-11-20 11:20:18.895431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.114 [2024-11-20 11:20:19.119338] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.114 [2024-11-20 11:20:19.119373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.372 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.372 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:36.372 11:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:36.372 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.372 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.372 [2024-11-20 11:20:19.485410] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.372 [2024-11-20 11:20:19.485478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.372 [2024-11-20 
11:20:19.485490] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.372 [2024-11-20 11:20:19.485500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:36.372 [2024-11-20 11:20:19.485508] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:36.372 [2024-11-20 11:20:19.485518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:36.632 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.632 11:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:36.632 11:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.632 11:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.632 11:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.632 11:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.632 11:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.632 11:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.632 11:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.632 11:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.632 11:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.632 11:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.632 11:20:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.632 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.632 11:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.632 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.632 11:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.632 "name": "Existed_Raid", 00:10:36.632 "uuid": "b770f15b-4bc0-4a62-8d0b-44a195d7e120", 00:10:36.632 "strip_size_kb": 64, 00:10:36.632 "state": "configuring", 00:10:36.632 "raid_level": "concat", 00:10:36.632 "superblock": true, 00:10:36.632 "num_base_bdevs": 3, 00:10:36.632 "num_base_bdevs_discovered": 0, 00:10:36.632 "num_base_bdevs_operational": 3, 00:10:36.632 "base_bdevs_list": [ 00:10:36.632 { 00:10:36.632 "name": "BaseBdev1", 00:10:36.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.632 "is_configured": false, 00:10:36.632 "data_offset": 0, 00:10:36.632 "data_size": 0 00:10:36.632 }, 00:10:36.632 { 00:10:36.632 "name": "BaseBdev2", 00:10:36.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.632 "is_configured": false, 00:10:36.632 "data_offset": 0, 00:10:36.632 "data_size": 0 00:10:36.632 }, 00:10:36.632 { 00:10:36.632 "name": "BaseBdev3", 00:10:36.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.632 "is_configured": false, 00:10:36.632 "data_offset": 0, 00:10:36.632 "data_size": 0 00:10:36.632 } 00:10:36.632 ] 00:10:36.632 }' 00:10:36.632 11:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.632 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.892 11:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:36.892 11:20:19 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.892 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.892 [2024-11-20 11:20:19.928595] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:36.892 [2024-11-20 11:20:19.928693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:36.892 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.892 11:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:36.892 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.892 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.892 [2024-11-20 11:20:19.940589] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.892 [2024-11-20 11:20:19.940680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.892 [2024-11-20 11:20:19.940725] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.892 [2024-11-20 11:20:19.940749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:36.892 [2024-11-20 11:20:19.940775] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:36.892 [2024-11-20 11:20:19.940809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:36.892 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.892 11:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:36.892 
11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.892 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.892 [2024-11-20 11:20:19.994076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.892 BaseBdev1 00:10:36.892 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.892 11:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:36.892 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:36.892 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.892 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:36.892 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.892 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.892 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.892 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.892 11:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.152 11:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.152 11:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:37.152 11:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.152 11:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.152 [ 00:10:37.152 { 
00:10:37.152 "name": "BaseBdev1", 00:10:37.152 "aliases": [ 00:10:37.152 "daa1a5c5-9208-49a7-acb2-59f5b118032b" 00:10:37.152 ], 00:10:37.152 "product_name": "Malloc disk", 00:10:37.152 "block_size": 512, 00:10:37.152 "num_blocks": 65536, 00:10:37.152 "uuid": "daa1a5c5-9208-49a7-acb2-59f5b118032b", 00:10:37.152 "assigned_rate_limits": { 00:10:37.152 "rw_ios_per_sec": 0, 00:10:37.152 "rw_mbytes_per_sec": 0, 00:10:37.152 "r_mbytes_per_sec": 0, 00:10:37.152 "w_mbytes_per_sec": 0 00:10:37.152 }, 00:10:37.152 "claimed": true, 00:10:37.152 "claim_type": "exclusive_write", 00:10:37.152 "zoned": false, 00:10:37.152 "supported_io_types": { 00:10:37.152 "read": true, 00:10:37.152 "write": true, 00:10:37.152 "unmap": true, 00:10:37.152 "flush": true, 00:10:37.152 "reset": true, 00:10:37.152 "nvme_admin": false, 00:10:37.152 "nvme_io": false, 00:10:37.152 "nvme_io_md": false, 00:10:37.152 "write_zeroes": true, 00:10:37.152 "zcopy": true, 00:10:37.152 "get_zone_info": false, 00:10:37.152 "zone_management": false, 00:10:37.152 "zone_append": false, 00:10:37.152 "compare": false, 00:10:37.152 "compare_and_write": false, 00:10:37.152 "abort": true, 00:10:37.152 "seek_hole": false, 00:10:37.152 "seek_data": false, 00:10:37.152 "copy": true, 00:10:37.152 "nvme_iov_md": false 00:10:37.152 }, 00:10:37.152 "memory_domains": [ 00:10:37.152 { 00:10:37.152 "dma_device_id": "system", 00:10:37.152 "dma_device_type": 1 00:10:37.152 }, 00:10:37.152 { 00:10:37.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.152 "dma_device_type": 2 00:10:37.152 } 00:10:37.152 ], 00:10:37.153 "driver_specific": {} 00:10:37.153 } 00:10:37.153 ] 00:10:37.153 11:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.153 11:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:37.153 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:10:37.153 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.153 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.153 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.153 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.153 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.153 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.153 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.153 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.153 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.153 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.153 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.153 11:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.153 11:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.153 11:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.153 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.153 "name": "Existed_Raid", 00:10:37.153 "uuid": "15233510-eb3e-4021-a50c-bf5bdd49c06a", 00:10:37.153 "strip_size_kb": 64, 00:10:37.153 "state": "configuring", 00:10:37.153 "raid_level": "concat", 00:10:37.153 "superblock": true, 00:10:37.153 
"num_base_bdevs": 3, 00:10:37.153 "num_base_bdevs_discovered": 1, 00:10:37.153 "num_base_bdevs_operational": 3, 00:10:37.153 "base_bdevs_list": [ 00:10:37.153 { 00:10:37.153 "name": "BaseBdev1", 00:10:37.153 "uuid": "daa1a5c5-9208-49a7-acb2-59f5b118032b", 00:10:37.153 "is_configured": true, 00:10:37.153 "data_offset": 2048, 00:10:37.153 "data_size": 63488 00:10:37.153 }, 00:10:37.153 { 00:10:37.153 "name": "BaseBdev2", 00:10:37.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.153 "is_configured": false, 00:10:37.153 "data_offset": 0, 00:10:37.153 "data_size": 0 00:10:37.153 }, 00:10:37.153 { 00:10:37.153 "name": "BaseBdev3", 00:10:37.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.153 "is_configured": false, 00:10:37.153 "data_offset": 0, 00:10:37.153 "data_size": 0 00:10:37.153 } 00:10:37.153 ] 00:10:37.153 }' 00:10:37.153 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.153 11:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.721 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:37.721 11:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.721 11:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.721 [2024-11-20 11:20:20.529223] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:37.721 [2024-11-20 11:20:20.529281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:37.721 11:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.721 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:37.721 
11:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.721 11:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.721 [2024-11-20 11:20:20.537270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.721 [2024-11-20 11:20:20.539283] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:37.721 [2024-11-20 11:20:20.539367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:37.721 [2024-11-20 11:20:20.539403] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:37.721 [2024-11-20 11:20:20.539429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:37.721 11:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.721 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:37.721 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:37.721 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:37.721 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.721 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.721 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.721 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.722 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.722 11:20:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.722 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.722 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.722 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.722 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.722 11:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.722 11:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.722 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.722 11:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.722 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.722 "name": "Existed_Raid", 00:10:37.722 "uuid": "497e7fee-0a7f-4157-8bf3-855d1c32bb3b", 00:10:37.722 "strip_size_kb": 64, 00:10:37.722 "state": "configuring", 00:10:37.722 "raid_level": "concat", 00:10:37.722 "superblock": true, 00:10:37.722 "num_base_bdevs": 3, 00:10:37.722 "num_base_bdevs_discovered": 1, 00:10:37.722 "num_base_bdevs_operational": 3, 00:10:37.722 "base_bdevs_list": [ 00:10:37.722 { 00:10:37.722 "name": "BaseBdev1", 00:10:37.722 "uuid": "daa1a5c5-9208-49a7-acb2-59f5b118032b", 00:10:37.722 "is_configured": true, 00:10:37.722 "data_offset": 2048, 00:10:37.722 "data_size": 63488 00:10:37.722 }, 00:10:37.722 { 00:10:37.722 "name": "BaseBdev2", 00:10:37.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.722 "is_configured": false, 00:10:37.722 "data_offset": 0, 00:10:37.722 "data_size": 0 00:10:37.722 }, 00:10:37.722 { 00:10:37.722 "name": "BaseBdev3", 00:10:37.722 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:37.722 "is_configured": false, 00:10:37.722 "data_offset": 0, 00:10:37.722 "data_size": 0 00:10:37.722 } 00:10:37.722 ] 00:10:37.722 }' 00:10:37.722 11:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.722 11:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.981 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:37.982 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.982 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.982 [2024-11-20 11:20:21.077378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:37.982 BaseBdev2 00:10:37.982 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.982 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:37.982 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:37.982 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.982 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:37.982 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.982 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.982 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.982 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.982 11:20:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:37.982 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.982 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:37.982 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.982 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.242 [ 00:10:38.242 { 00:10:38.242 "name": "BaseBdev2", 00:10:38.242 "aliases": [ 00:10:38.242 "cb0324f7-7aa5-4f37-a39d-09eb97820993" 00:10:38.242 ], 00:10:38.242 "product_name": "Malloc disk", 00:10:38.242 "block_size": 512, 00:10:38.242 "num_blocks": 65536, 00:10:38.242 "uuid": "cb0324f7-7aa5-4f37-a39d-09eb97820993", 00:10:38.242 "assigned_rate_limits": { 00:10:38.242 "rw_ios_per_sec": 0, 00:10:38.242 "rw_mbytes_per_sec": 0, 00:10:38.242 "r_mbytes_per_sec": 0, 00:10:38.242 "w_mbytes_per_sec": 0 00:10:38.242 }, 00:10:38.242 "claimed": true, 00:10:38.242 "claim_type": "exclusive_write", 00:10:38.242 "zoned": false, 00:10:38.242 "supported_io_types": { 00:10:38.242 "read": true, 00:10:38.242 "write": true, 00:10:38.242 "unmap": true, 00:10:38.242 "flush": true, 00:10:38.242 "reset": true, 00:10:38.242 "nvme_admin": false, 00:10:38.242 "nvme_io": false, 00:10:38.242 "nvme_io_md": false, 00:10:38.242 "write_zeroes": true, 00:10:38.242 "zcopy": true, 00:10:38.242 "get_zone_info": false, 00:10:38.242 "zone_management": false, 00:10:38.242 "zone_append": false, 00:10:38.242 "compare": false, 00:10:38.242 "compare_and_write": false, 00:10:38.242 "abort": true, 00:10:38.242 "seek_hole": false, 00:10:38.242 "seek_data": false, 00:10:38.242 "copy": true, 00:10:38.242 "nvme_iov_md": false 00:10:38.242 }, 00:10:38.242 "memory_domains": [ 00:10:38.242 { 00:10:38.242 "dma_device_id": "system", 00:10:38.242 "dma_device_type": 1 00:10:38.242 }, 00:10:38.242 { 00:10:38.242 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.242 "dma_device_type": 2 00:10:38.242 } 00:10:38.242 ], 00:10:38.242 "driver_specific": {} 00:10:38.242 } 00:10:38.242 ] 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.242 "name": "Existed_Raid", 00:10:38.242 "uuid": "497e7fee-0a7f-4157-8bf3-855d1c32bb3b", 00:10:38.242 "strip_size_kb": 64, 00:10:38.242 "state": "configuring", 00:10:38.242 "raid_level": "concat", 00:10:38.242 "superblock": true, 00:10:38.242 "num_base_bdevs": 3, 00:10:38.242 "num_base_bdevs_discovered": 2, 00:10:38.242 "num_base_bdevs_operational": 3, 00:10:38.242 "base_bdevs_list": [ 00:10:38.242 { 00:10:38.242 "name": "BaseBdev1", 00:10:38.242 "uuid": "daa1a5c5-9208-49a7-acb2-59f5b118032b", 00:10:38.242 "is_configured": true, 00:10:38.242 "data_offset": 2048, 00:10:38.242 "data_size": 63488 00:10:38.242 }, 00:10:38.242 { 00:10:38.242 "name": "BaseBdev2", 00:10:38.242 "uuid": "cb0324f7-7aa5-4f37-a39d-09eb97820993", 00:10:38.242 "is_configured": true, 00:10:38.242 "data_offset": 2048, 00:10:38.242 "data_size": 63488 00:10:38.242 }, 00:10:38.242 { 00:10:38.242 "name": "BaseBdev3", 00:10:38.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.242 "is_configured": false, 00:10:38.242 "data_offset": 0, 00:10:38.242 "data_size": 0 00:10:38.242 } 00:10:38.242 ] 00:10:38.242 }' 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.242 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.500 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:38.500 11:20:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.500 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.500 [2024-11-20 11:20:21.595321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:38.500 [2024-11-20 11:20:21.595707] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:38.500 [2024-11-20 11:20:21.595771] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:38.500 [2024-11-20 11:20:21.596214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:38.500 BaseBdev3 00:10:38.500 [2024-11-20 11:20:21.596428] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:38.500 [2024-11-20 11:20:21.596487] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:38.500 [2024-11-20 11:20:21.596675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.500 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.500 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:38.500 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:38.501 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.501 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:38.501 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.501 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.501 11:20:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.501 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.501 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.501 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.501 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:38.501 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.501 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.760 [ 00:10:38.760 { 00:10:38.760 "name": "BaseBdev3", 00:10:38.760 "aliases": [ 00:10:38.760 "63e13106-bc49-4845-9229-dfd66e0f7b0e" 00:10:38.760 ], 00:10:38.760 "product_name": "Malloc disk", 00:10:38.760 "block_size": 512, 00:10:38.760 "num_blocks": 65536, 00:10:38.760 "uuid": "63e13106-bc49-4845-9229-dfd66e0f7b0e", 00:10:38.760 "assigned_rate_limits": { 00:10:38.760 "rw_ios_per_sec": 0, 00:10:38.760 "rw_mbytes_per_sec": 0, 00:10:38.760 "r_mbytes_per_sec": 0, 00:10:38.760 "w_mbytes_per_sec": 0 00:10:38.760 }, 00:10:38.760 "claimed": true, 00:10:38.760 "claim_type": "exclusive_write", 00:10:38.760 "zoned": false, 00:10:38.760 "supported_io_types": { 00:10:38.760 "read": true, 00:10:38.760 "write": true, 00:10:38.760 "unmap": true, 00:10:38.760 "flush": true, 00:10:38.760 "reset": true, 00:10:38.760 "nvme_admin": false, 00:10:38.760 "nvme_io": false, 00:10:38.760 "nvme_io_md": false, 00:10:38.760 "write_zeroes": true, 00:10:38.760 "zcopy": true, 00:10:38.760 "get_zone_info": false, 00:10:38.760 "zone_management": false, 00:10:38.760 "zone_append": false, 00:10:38.760 "compare": false, 00:10:38.760 "compare_and_write": false, 00:10:38.760 "abort": true, 00:10:38.760 "seek_hole": false, 00:10:38.760 "seek_data": false, 
00:10:38.760 "copy": true, 00:10:38.760 "nvme_iov_md": false 00:10:38.760 }, 00:10:38.760 "memory_domains": [ 00:10:38.760 { 00:10:38.760 "dma_device_id": "system", 00:10:38.760 "dma_device_type": 1 00:10:38.760 }, 00:10:38.760 { 00:10:38.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.760 "dma_device_type": 2 00:10:38.760 } 00:10:38.760 ], 00:10:38.760 "driver_specific": {} 00:10:38.760 } 00:10:38.760 ] 00:10:38.760 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.760 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:38.760 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:38.760 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.760 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:38.760 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.760 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.760 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.760 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.760 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.760 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.760 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.760 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.760 11:20:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.760 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.760 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.760 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.760 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.760 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.760 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.760 "name": "Existed_Raid", 00:10:38.760 "uuid": "497e7fee-0a7f-4157-8bf3-855d1c32bb3b", 00:10:38.760 "strip_size_kb": 64, 00:10:38.760 "state": "online", 00:10:38.760 "raid_level": "concat", 00:10:38.760 "superblock": true, 00:10:38.760 "num_base_bdevs": 3, 00:10:38.760 "num_base_bdevs_discovered": 3, 00:10:38.760 "num_base_bdevs_operational": 3, 00:10:38.760 "base_bdevs_list": [ 00:10:38.760 { 00:10:38.760 "name": "BaseBdev1", 00:10:38.760 "uuid": "daa1a5c5-9208-49a7-acb2-59f5b118032b", 00:10:38.760 "is_configured": true, 00:10:38.760 "data_offset": 2048, 00:10:38.760 "data_size": 63488 00:10:38.760 }, 00:10:38.760 { 00:10:38.760 "name": "BaseBdev2", 00:10:38.760 "uuid": "cb0324f7-7aa5-4f37-a39d-09eb97820993", 00:10:38.760 "is_configured": true, 00:10:38.760 "data_offset": 2048, 00:10:38.760 "data_size": 63488 00:10:38.760 }, 00:10:38.760 { 00:10:38.760 "name": "BaseBdev3", 00:10:38.760 "uuid": "63e13106-bc49-4845-9229-dfd66e0f7b0e", 00:10:38.760 "is_configured": true, 00:10:38.760 "data_offset": 2048, 00:10:38.760 "data_size": 63488 00:10:38.760 } 00:10:38.760 ] 00:10:38.760 }' 00:10:38.760 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.760 11:20:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.027 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:39.027 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:39.027 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:39.027 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:39.027 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:39.027 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:39.027 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:39.027 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:39.027 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.027 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.027 [2024-11-20 11:20:22.082876] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.027 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.027 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:39.027 "name": "Existed_Raid", 00:10:39.027 "aliases": [ 00:10:39.027 "497e7fee-0a7f-4157-8bf3-855d1c32bb3b" 00:10:39.027 ], 00:10:39.027 "product_name": "Raid Volume", 00:10:39.027 "block_size": 512, 00:10:39.027 "num_blocks": 190464, 00:10:39.027 "uuid": "497e7fee-0a7f-4157-8bf3-855d1c32bb3b", 00:10:39.027 "assigned_rate_limits": { 00:10:39.027 "rw_ios_per_sec": 0, 00:10:39.027 "rw_mbytes_per_sec": 0, 00:10:39.027 
"r_mbytes_per_sec": 0, 00:10:39.027 "w_mbytes_per_sec": 0 00:10:39.027 }, 00:10:39.027 "claimed": false, 00:10:39.027 "zoned": false, 00:10:39.027 "supported_io_types": { 00:10:39.027 "read": true, 00:10:39.027 "write": true, 00:10:39.027 "unmap": true, 00:10:39.027 "flush": true, 00:10:39.027 "reset": true, 00:10:39.027 "nvme_admin": false, 00:10:39.027 "nvme_io": false, 00:10:39.027 "nvme_io_md": false, 00:10:39.027 "write_zeroes": true, 00:10:39.027 "zcopy": false, 00:10:39.027 "get_zone_info": false, 00:10:39.027 "zone_management": false, 00:10:39.027 "zone_append": false, 00:10:39.027 "compare": false, 00:10:39.027 "compare_and_write": false, 00:10:39.027 "abort": false, 00:10:39.027 "seek_hole": false, 00:10:39.027 "seek_data": false, 00:10:39.027 "copy": false, 00:10:39.027 "nvme_iov_md": false 00:10:39.027 }, 00:10:39.027 "memory_domains": [ 00:10:39.027 { 00:10:39.027 "dma_device_id": "system", 00:10:39.027 "dma_device_type": 1 00:10:39.027 }, 00:10:39.027 { 00:10:39.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.027 "dma_device_type": 2 00:10:39.027 }, 00:10:39.027 { 00:10:39.027 "dma_device_id": "system", 00:10:39.027 "dma_device_type": 1 00:10:39.027 }, 00:10:39.027 { 00:10:39.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.027 "dma_device_type": 2 00:10:39.027 }, 00:10:39.027 { 00:10:39.027 "dma_device_id": "system", 00:10:39.027 "dma_device_type": 1 00:10:39.027 }, 00:10:39.027 { 00:10:39.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.027 "dma_device_type": 2 00:10:39.027 } 00:10:39.027 ], 00:10:39.027 "driver_specific": { 00:10:39.027 "raid": { 00:10:39.027 "uuid": "497e7fee-0a7f-4157-8bf3-855d1c32bb3b", 00:10:39.027 "strip_size_kb": 64, 00:10:39.027 "state": "online", 00:10:39.027 "raid_level": "concat", 00:10:39.027 "superblock": true, 00:10:39.027 "num_base_bdevs": 3, 00:10:39.027 "num_base_bdevs_discovered": 3, 00:10:39.027 "num_base_bdevs_operational": 3, 00:10:39.027 "base_bdevs_list": [ 00:10:39.027 { 00:10:39.027 
"name": "BaseBdev1", 00:10:39.027 "uuid": "daa1a5c5-9208-49a7-acb2-59f5b118032b", 00:10:39.027 "is_configured": true, 00:10:39.027 "data_offset": 2048, 00:10:39.027 "data_size": 63488 00:10:39.027 }, 00:10:39.027 { 00:10:39.027 "name": "BaseBdev2", 00:10:39.027 "uuid": "cb0324f7-7aa5-4f37-a39d-09eb97820993", 00:10:39.027 "is_configured": true, 00:10:39.027 "data_offset": 2048, 00:10:39.027 "data_size": 63488 00:10:39.027 }, 00:10:39.027 { 00:10:39.027 "name": "BaseBdev3", 00:10:39.027 "uuid": "63e13106-bc49-4845-9229-dfd66e0f7b0e", 00:10:39.027 "is_configured": true, 00:10:39.027 "data_offset": 2048, 00:10:39.027 "data_size": 63488 00:10:39.027 } 00:10:39.027 ] 00:10:39.027 } 00:10:39.027 } 00:10:39.027 }' 00:10:39.027 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:39.296 BaseBdev2 00:10:39.296 BaseBdev3' 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.296 11:20:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.296 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:39.297 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.297 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.297 [2024-11-20 11:20:22.342219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:39.297 [2024-11-20 11:20:22.342252] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.297 [2024-11-20 11:20:22.342310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.557 "name": "Existed_Raid", 00:10:39.557 "uuid": "497e7fee-0a7f-4157-8bf3-855d1c32bb3b", 00:10:39.557 "strip_size_kb": 64, 00:10:39.557 "state": "offline", 00:10:39.557 "raid_level": "concat", 00:10:39.557 "superblock": true, 00:10:39.557 "num_base_bdevs": 3, 00:10:39.557 "num_base_bdevs_discovered": 2, 00:10:39.557 "num_base_bdevs_operational": 2, 00:10:39.557 "base_bdevs_list": [ 00:10:39.557 { 00:10:39.557 "name": null, 00:10:39.557 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:39.557 "is_configured": false, 00:10:39.557 "data_offset": 0, 00:10:39.557 "data_size": 63488 00:10:39.557 }, 00:10:39.557 { 00:10:39.557 "name": "BaseBdev2", 00:10:39.557 "uuid": "cb0324f7-7aa5-4f37-a39d-09eb97820993", 00:10:39.557 "is_configured": true, 00:10:39.557 "data_offset": 2048, 00:10:39.557 "data_size": 63488 00:10:39.557 }, 00:10:39.557 { 00:10:39.557 "name": "BaseBdev3", 00:10:39.557 "uuid": "63e13106-bc49-4845-9229-dfd66e0f7b0e", 00:10:39.557 "is_configured": true, 00:10:39.557 "data_offset": 2048, 00:10:39.557 "data_size": 63488 00:10:39.557 } 00:10:39.557 ] 00:10:39.557 }' 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.557 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.817 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:39.817 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:39.817 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:39.817 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.817 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.817 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.817 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.817 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:39.817 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:39.817 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:10:39.817 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.817 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.817 [2024-11-20 11:20:22.931310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:40.077 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.077 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:40.077 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:40.077 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.077 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.077 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.077 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:40.077 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.077 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:40.077 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:40.077 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:40.077 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.077 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.077 [2024-11-20 11:20:23.091085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:40.077 [2024-11-20 11:20:23.091142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 BaseBdev2 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 
11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 [ 00:10:40.337 { 00:10:40.337 "name": "BaseBdev2", 00:10:40.337 "aliases": [ 00:10:40.337 "7ce67c95-4e88-41ae-be87-6fa6df40ce5e" 00:10:40.337 ], 00:10:40.337 "product_name": "Malloc disk", 00:10:40.337 "block_size": 512, 00:10:40.337 "num_blocks": 65536, 00:10:40.337 "uuid": "7ce67c95-4e88-41ae-be87-6fa6df40ce5e", 00:10:40.337 "assigned_rate_limits": { 00:10:40.337 "rw_ios_per_sec": 0, 00:10:40.337 "rw_mbytes_per_sec": 0, 00:10:40.337 "r_mbytes_per_sec": 0, 00:10:40.337 "w_mbytes_per_sec": 0 
00:10:40.337 }, 00:10:40.337 "claimed": false, 00:10:40.337 "zoned": false, 00:10:40.337 "supported_io_types": { 00:10:40.337 "read": true, 00:10:40.337 "write": true, 00:10:40.337 "unmap": true, 00:10:40.337 "flush": true, 00:10:40.337 "reset": true, 00:10:40.337 "nvme_admin": false, 00:10:40.337 "nvme_io": false, 00:10:40.337 "nvme_io_md": false, 00:10:40.337 "write_zeroes": true, 00:10:40.337 "zcopy": true, 00:10:40.337 "get_zone_info": false, 00:10:40.337 "zone_management": false, 00:10:40.337 "zone_append": false, 00:10:40.337 "compare": false, 00:10:40.337 "compare_and_write": false, 00:10:40.337 "abort": true, 00:10:40.337 "seek_hole": false, 00:10:40.337 "seek_data": false, 00:10:40.337 "copy": true, 00:10:40.337 "nvme_iov_md": false 00:10:40.337 }, 00:10:40.337 "memory_domains": [ 00:10:40.337 { 00:10:40.337 "dma_device_id": "system", 00:10:40.337 "dma_device_type": 1 00:10:40.337 }, 00:10:40.337 { 00:10:40.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.337 "dma_device_type": 2 00:10:40.337 } 00:10:40.337 ], 00:10:40.337 "driver_specific": {} 00:10:40.337 } 00:10:40.337 ] 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 BaseBdev3 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:40.337 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.338 [ 00:10:40.338 { 00:10:40.338 "name": "BaseBdev3", 00:10:40.338 "aliases": [ 00:10:40.338 "538da96c-2b75-4e79-8b85-f19d320d9dd0" 00:10:40.338 ], 00:10:40.338 "product_name": "Malloc disk", 00:10:40.338 "block_size": 512, 00:10:40.338 "num_blocks": 65536, 00:10:40.338 "uuid": "538da96c-2b75-4e79-8b85-f19d320d9dd0", 00:10:40.338 "assigned_rate_limits": { 00:10:40.338 "rw_ios_per_sec": 0, 00:10:40.338 "rw_mbytes_per_sec": 0, 
00:10:40.338 "r_mbytes_per_sec": 0, 00:10:40.338 "w_mbytes_per_sec": 0 00:10:40.338 }, 00:10:40.338 "claimed": false, 00:10:40.338 "zoned": false, 00:10:40.338 "supported_io_types": { 00:10:40.338 "read": true, 00:10:40.338 "write": true, 00:10:40.338 "unmap": true, 00:10:40.338 "flush": true, 00:10:40.338 "reset": true, 00:10:40.338 "nvme_admin": false, 00:10:40.338 "nvme_io": false, 00:10:40.338 "nvme_io_md": false, 00:10:40.338 "write_zeroes": true, 00:10:40.338 "zcopy": true, 00:10:40.338 "get_zone_info": false, 00:10:40.338 "zone_management": false, 00:10:40.338 "zone_append": false, 00:10:40.338 "compare": false, 00:10:40.338 "compare_and_write": false, 00:10:40.338 "abort": true, 00:10:40.338 "seek_hole": false, 00:10:40.338 "seek_data": false, 00:10:40.338 "copy": true, 00:10:40.338 "nvme_iov_md": false 00:10:40.338 }, 00:10:40.338 "memory_domains": [ 00:10:40.338 { 00:10:40.338 "dma_device_id": "system", 00:10:40.338 "dma_device_type": 1 00:10:40.338 }, 00:10:40.338 { 00:10:40.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.338 "dma_device_type": 2 00:10:40.338 } 00:10:40.338 ], 00:10:40.338 "driver_specific": {} 00:10:40.338 } 00:10:40.338 ] 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:40.338 [2024-11-20 11:20:23.410804] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:40.338 [2024-11-20 11:20:23.410904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:40.338 [2024-11-20 11:20:23.410957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.338 [2024-11-20 11:20:23.412927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.338 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.597 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.597 "name": "Existed_Raid", 00:10:40.597 "uuid": "584ab98d-08fa-461b-9152-255e8cf9c22c", 00:10:40.597 "strip_size_kb": 64, 00:10:40.597 "state": "configuring", 00:10:40.597 "raid_level": "concat", 00:10:40.597 "superblock": true, 00:10:40.597 "num_base_bdevs": 3, 00:10:40.597 "num_base_bdevs_discovered": 2, 00:10:40.597 "num_base_bdevs_operational": 3, 00:10:40.597 "base_bdevs_list": [ 00:10:40.597 { 00:10:40.597 "name": "BaseBdev1", 00:10:40.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.597 "is_configured": false, 00:10:40.597 "data_offset": 0, 00:10:40.597 "data_size": 0 00:10:40.597 }, 00:10:40.597 { 00:10:40.597 "name": "BaseBdev2", 00:10:40.597 "uuid": "7ce67c95-4e88-41ae-be87-6fa6df40ce5e", 00:10:40.597 "is_configured": true, 00:10:40.597 "data_offset": 2048, 00:10:40.597 "data_size": 63488 00:10:40.597 }, 00:10:40.597 { 00:10:40.597 "name": "BaseBdev3", 00:10:40.597 "uuid": "538da96c-2b75-4e79-8b85-f19d320d9dd0", 00:10:40.597 "is_configured": true, 00:10:40.597 "data_offset": 2048, 00:10:40.597 "data_size": 63488 00:10:40.597 } 00:10:40.597 ] 00:10:40.597 }' 00:10:40.597 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.597 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev2 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.856 [2024-11-20 11:20:23.866036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.856 "name": "Existed_Raid", 00:10:40.856 "uuid": "584ab98d-08fa-461b-9152-255e8cf9c22c", 00:10:40.856 "strip_size_kb": 64, 00:10:40.856 "state": "configuring", 00:10:40.856 "raid_level": "concat", 00:10:40.856 "superblock": true, 00:10:40.856 "num_base_bdevs": 3, 00:10:40.856 "num_base_bdevs_discovered": 1, 00:10:40.856 "num_base_bdevs_operational": 3, 00:10:40.856 "base_bdevs_list": [ 00:10:40.856 { 00:10:40.856 "name": "BaseBdev1", 00:10:40.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.856 "is_configured": false, 00:10:40.856 "data_offset": 0, 00:10:40.856 "data_size": 0 00:10:40.856 }, 00:10:40.856 { 00:10:40.856 "name": null, 00:10:40.856 "uuid": "7ce67c95-4e88-41ae-be87-6fa6df40ce5e", 00:10:40.856 "is_configured": false, 00:10:40.856 "data_offset": 0, 00:10:40.856 "data_size": 63488 00:10:40.856 }, 00:10:40.856 { 00:10:40.856 "name": "BaseBdev3", 00:10:40.856 "uuid": "538da96c-2b75-4e79-8b85-f19d320d9dd0", 00:10:40.856 "is_configured": true, 00:10:40.856 "data_offset": 2048, 00:10:40.856 "data_size": 63488 00:10:40.856 } 00:10:40.856 ] 00:10:40.856 }' 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.856 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.424 [2024-11-20 11:20:24.373005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:41.424 BaseBdev1 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.424 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.424 [ 00:10:41.424 { 00:10:41.424 "name": "BaseBdev1", 00:10:41.424 "aliases": [ 00:10:41.424 "035cc5c5-aab4-4c9e-9e0e-1605ab184e15" 00:10:41.424 ], 00:10:41.424 "product_name": "Malloc disk", 00:10:41.424 "block_size": 512, 00:10:41.424 "num_blocks": 65536, 00:10:41.424 "uuid": "035cc5c5-aab4-4c9e-9e0e-1605ab184e15", 00:10:41.424 "assigned_rate_limits": { 00:10:41.424 "rw_ios_per_sec": 0, 00:10:41.424 "rw_mbytes_per_sec": 0, 00:10:41.424 "r_mbytes_per_sec": 0, 00:10:41.424 "w_mbytes_per_sec": 0 00:10:41.424 }, 00:10:41.424 "claimed": true, 00:10:41.424 "claim_type": "exclusive_write", 00:10:41.424 "zoned": false, 00:10:41.424 "supported_io_types": { 00:10:41.424 "read": true, 00:10:41.424 "write": true, 00:10:41.424 "unmap": true, 00:10:41.424 "flush": true, 00:10:41.424 "reset": true, 00:10:41.424 "nvme_admin": false, 00:10:41.424 "nvme_io": false, 00:10:41.424 "nvme_io_md": false, 00:10:41.424 "write_zeroes": true, 00:10:41.424 "zcopy": true, 00:10:41.424 "get_zone_info": false, 00:10:41.424 "zone_management": false, 00:10:41.424 "zone_append": false, 00:10:41.424 "compare": false, 00:10:41.424 "compare_and_write": false, 00:10:41.424 "abort": true, 00:10:41.424 "seek_hole": false, 00:10:41.424 "seek_data": false, 00:10:41.424 "copy": true, 00:10:41.424 "nvme_iov_md": false 00:10:41.424 }, 00:10:41.424 "memory_domains": [ 00:10:41.424 { 00:10:41.424 "dma_device_id": "system", 00:10:41.425 "dma_device_type": 1 00:10:41.425 }, 00:10:41.425 { 00:10:41.425 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:41.425 "dma_device_type": 2 00:10:41.425 } 00:10:41.425 ], 00:10:41.425 "driver_specific": {} 00:10:41.425 } 00:10:41.425 ] 00:10:41.425 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.425 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:41.425 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:41.425 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.425 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.425 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.425 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.425 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.425 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.425 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.425 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.425 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.425 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.425 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.425 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.425 11:20:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:41.425 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.425 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.425 "name": "Existed_Raid", 00:10:41.425 "uuid": "584ab98d-08fa-461b-9152-255e8cf9c22c", 00:10:41.425 "strip_size_kb": 64, 00:10:41.425 "state": "configuring", 00:10:41.425 "raid_level": "concat", 00:10:41.425 "superblock": true, 00:10:41.425 "num_base_bdevs": 3, 00:10:41.425 "num_base_bdevs_discovered": 2, 00:10:41.425 "num_base_bdevs_operational": 3, 00:10:41.425 "base_bdevs_list": [ 00:10:41.425 { 00:10:41.425 "name": "BaseBdev1", 00:10:41.425 "uuid": "035cc5c5-aab4-4c9e-9e0e-1605ab184e15", 00:10:41.425 "is_configured": true, 00:10:41.425 "data_offset": 2048, 00:10:41.425 "data_size": 63488 00:10:41.425 }, 00:10:41.425 { 00:10:41.425 "name": null, 00:10:41.425 "uuid": "7ce67c95-4e88-41ae-be87-6fa6df40ce5e", 00:10:41.425 "is_configured": false, 00:10:41.425 "data_offset": 0, 00:10:41.425 "data_size": 63488 00:10:41.425 }, 00:10:41.425 { 00:10:41.425 "name": "BaseBdev3", 00:10:41.425 "uuid": "538da96c-2b75-4e79-8b85-f19d320d9dd0", 00:10:41.425 "is_configured": true, 00:10:41.425 "data_offset": 2048, 00:10:41.425 "data_size": 63488 00:10:41.425 } 00:10:41.425 ] 00:10:41.425 }' 00:10:41.425 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.425 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- 
# jq '.[0].base_bdevs_list[0].is_configured' 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.013 [2024-11-20 11:20:24.972060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.013 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.013 11:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.013 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.013 "name": "Existed_Raid", 00:10:42.013 "uuid": "584ab98d-08fa-461b-9152-255e8cf9c22c", 00:10:42.013 "strip_size_kb": 64, 00:10:42.013 "state": "configuring", 00:10:42.013 "raid_level": "concat", 00:10:42.013 "superblock": true, 00:10:42.013 "num_base_bdevs": 3, 00:10:42.013 "num_base_bdevs_discovered": 1, 00:10:42.013 "num_base_bdevs_operational": 3, 00:10:42.013 "base_bdevs_list": [ 00:10:42.013 { 00:10:42.013 "name": "BaseBdev1", 00:10:42.013 "uuid": "035cc5c5-aab4-4c9e-9e0e-1605ab184e15", 00:10:42.013 "is_configured": true, 00:10:42.013 "data_offset": 2048, 00:10:42.013 "data_size": 63488 00:10:42.013 }, 00:10:42.013 { 00:10:42.013 "name": null, 00:10:42.013 "uuid": "7ce67c95-4e88-41ae-be87-6fa6df40ce5e", 00:10:42.013 "is_configured": false, 00:10:42.013 "data_offset": 0, 00:10:42.013 "data_size": 63488 00:10:42.013 }, 00:10:42.013 { 00:10:42.013 "name": null, 00:10:42.013 "uuid": "538da96c-2b75-4e79-8b85-f19d320d9dd0", 00:10:42.013 "is_configured": false, 00:10:42.013 "data_offset": 0, 00:10:42.013 "data_size": 63488 00:10:42.013 } 00:10:42.013 ] 00:10:42.013 }' 00:10:42.013 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.013 11:20:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.597 [2024-11-20 11:20:25.479245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.597 11:20:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.597 "name": "Existed_Raid", 00:10:42.597 "uuid": "584ab98d-08fa-461b-9152-255e8cf9c22c", 00:10:42.597 "strip_size_kb": 64, 00:10:42.597 "state": "configuring", 00:10:42.597 "raid_level": "concat", 00:10:42.597 "superblock": true, 00:10:42.597 "num_base_bdevs": 3, 00:10:42.597 "num_base_bdevs_discovered": 2, 00:10:42.597 "num_base_bdevs_operational": 3, 00:10:42.597 "base_bdevs_list": [ 00:10:42.597 { 00:10:42.597 "name": "BaseBdev1", 00:10:42.597 "uuid": "035cc5c5-aab4-4c9e-9e0e-1605ab184e15", 00:10:42.597 "is_configured": true, 00:10:42.597 "data_offset": 2048, 00:10:42.597 "data_size": 63488 00:10:42.597 }, 00:10:42.597 { 00:10:42.597 "name": null, 00:10:42.597 "uuid": "7ce67c95-4e88-41ae-be87-6fa6df40ce5e", 00:10:42.597 "is_configured": 
false, 00:10:42.597 "data_offset": 0, 00:10:42.597 "data_size": 63488 00:10:42.597 }, 00:10:42.597 { 00:10:42.597 "name": "BaseBdev3", 00:10:42.597 "uuid": "538da96c-2b75-4e79-8b85-f19d320d9dd0", 00:10:42.597 "is_configured": true, 00:10:42.597 "data_offset": 2048, 00:10:42.597 "data_size": 63488 00:10:42.597 } 00:10:42.597 ] 00:10:42.597 }' 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.597 11:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.855 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.855 11:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.855 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:42.855 11:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.855 11:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.855 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:42.855 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:42.855 11:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.855 11:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.855 [2024-11-20 11:20:25.958468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:43.114 11:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.114 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:43.114 11:20:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.114 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.114 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.114 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.114 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.114 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.114 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.114 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.114 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.114 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.114 11:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.114 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.114 11:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.114 11:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.114 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.114 "name": "Existed_Raid", 00:10:43.114 "uuid": "584ab98d-08fa-461b-9152-255e8cf9c22c", 00:10:43.114 "strip_size_kb": 64, 00:10:43.114 "state": "configuring", 00:10:43.114 "raid_level": "concat", 00:10:43.114 "superblock": true, 00:10:43.114 "num_base_bdevs": 3, 00:10:43.114 
"num_base_bdevs_discovered": 1, 00:10:43.114 "num_base_bdevs_operational": 3, 00:10:43.114 "base_bdevs_list": [ 00:10:43.114 { 00:10:43.114 "name": null, 00:10:43.114 "uuid": "035cc5c5-aab4-4c9e-9e0e-1605ab184e15", 00:10:43.114 "is_configured": false, 00:10:43.114 "data_offset": 0, 00:10:43.114 "data_size": 63488 00:10:43.114 }, 00:10:43.114 { 00:10:43.114 "name": null, 00:10:43.114 "uuid": "7ce67c95-4e88-41ae-be87-6fa6df40ce5e", 00:10:43.114 "is_configured": false, 00:10:43.114 "data_offset": 0, 00:10:43.114 "data_size": 63488 00:10:43.114 }, 00:10:43.114 { 00:10:43.114 "name": "BaseBdev3", 00:10:43.114 "uuid": "538da96c-2b75-4e79-8b85-f19d320d9dd0", 00:10:43.114 "is_configured": true, 00:10:43.114 "data_offset": 2048, 00:10:43.114 "data_size": 63488 00:10:43.114 } 00:10:43.114 ] 00:10:43.114 }' 00:10:43.114 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.114 11:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.684 11:20:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.684 [2024-11-20 11:20:26.557946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.684 
11:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.684 "name": "Existed_Raid", 00:10:43.684 "uuid": "584ab98d-08fa-461b-9152-255e8cf9c22c", 00:10:43.684 "strip_size_kb": 64, 00:10:43.684 "state": "configuring", 00:10:43.684 "raid_level": "concat", 00:10:43.684 "superblock": true, 00:10:43.684 "num_base_bdevs": 3, 00:10:43.684 "num_base_bdevs_discovered": 2, 00:10:43.684 "num_base_bdevs_operational": 3, 00:10:43.684 "base_bdevs_list": [ 00:10:43.684 { 00:10:43.684 "name": null, 00:10:43.684 "uuid": "035cc5c5-aab4-4c9e-9e0e-1605ab184e15", 00:10:43.684 "is_configured": false, 00:10:43.684 "data_offset": 0, 00:10:43.684 "data_size": 63488 00:10:43.684 }, 00:10:43.684 { 00:10:43.684 "name": "BaseBdev2", 00:10:43.684 "uuid": "7ce67c95-4e88-41ae-be87-6fa6df40ce5e", 00:10:43.684 "is_configured": true, 00:10:43.684 "data_offset": 2048, 00:10:43.684 "data_size": 63488 00:10:43.684 }, 00:10:43.684 { 00:10:43.684 "name": "BaseBdev3", 00:10:43.684 "uuid": "538da96c-2b75-4e79-8b85-f19d320d9dd0", 00:10:43.684 "is_configured": true, 00:10:43.684 "data_offset": 2048, 00:10:43.684 "data_size": 63488 00:10:43.684 } 00:10:43.684 ] 00:10:43.684 }' 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.684 11:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.944 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.944 11:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.944 11:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.944 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:10:43.944 11:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.944 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:43.944 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:43.944 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.944 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.944 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.944 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.944 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 035cc5c5-aab4-4c9e-9e0e-1605ab184e15 00:10:43.944 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.944 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.204 [2024-11-20 11:20:27.098356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:44.204 [2024-11-20 11:20:27.098724] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:44.204 NewBaseBdev 00:10:44.204 [2024-11-20 11:20:27.098780] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:44.204 [2024-11-20 11:20:27.099087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:44.204 [2024-11-20 11:20:27.099259] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:44.204 [2024-11-20 11:20:27.099271] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:10:44.204 [2024-11-20 11:20:27.099445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.204 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.204 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:44.204 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:44.204 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.204 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:44.204 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.204 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.204 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.204 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.204 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.204 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.204 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.205 [ 00:10:44.205 { 00:10:44.205 "name": "NewBaseBdev", 00:10:44.205 "aliases": [ 00:10:44.205 "035cc5c5-aab4-4c9e-9e0e-1605ab184e15" 00:10:44.205 ], 00:10:44.205 "product_name": "Malloc disk", 00:10:44.205 "block_size": 512, 
00:10:44.205 "num_blocks": 65536, 00:10:44.205 "uuid": "035cc5c5-aab4-4c9e-9e0e-1605ab184e15", 00:10:44.205 "assigned_rate_limits": { 00:10:44.205 "rw_ios_per_sec": 0, 00:10:44.205 "rw_mbytes_per_sec": 0, 00:10:44.205 "r_mbytes_per_sec": 0, 00:10:44.205 "w_mbytes_per_sec": 0 00:10:44.205 }, 00:10:44.205 "claimed": true, 00:10:44.205 "claim_type": "exclusive_write", 00:10:44.205 "zoned": false, 00:10:44.205 "supported_io_types": { 00:10:44.205 "read": true, 00:10:44.205 "write": true, 00:10:44.205 "unmap": true, 00:10:44.205 "flush": true, 00:10:44.205 "reset": true, 00:10:44.205 "nvme_admin": false, 00:10:44.205 "nvme_io": false, 00:10:44.205 "nvme_io_md": false, 00:10:44.205 "write_zeroes": true, 00:10:44.205 "zcopy": true, 00:10:44.205 "get_zone_info": false, 00:10:44.205 "zone_management": false, 00:10:44.205 "zone_append": false, 00:10:44.205 "compare": false, 00:10:44.205 "compare_and_write": false, 00:10:44.205 "abort": true, 00:10:44.205 "seek_hole": false, 00:10:44.205 "seek_data": false, 00:10:44.205 "copy": true, 00:10:44.205 "nvme_iov_md": false 00:10:44.205 }, 00:10:44.205 "memory_domains": [ 00:10:44.205 { 00:10:44.205 "dma_device_id": "system", 00:10:44.205 "dma_device_type": 1 00:10:44.205 }, 00:10:44.205 { 00:10:44.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.205 "dma_device_type": 2 00:10:44.205 } 00:10:44.205 ], 00:10:44.205 "driver_specific": {} 00:10:44.205 } 00:10:44.205 ] 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.205 "name": "Existed_Raid", 00:10:44.205 "uuid": "584ab98d-08fa-461b-9152-255e8cf9c22c", 00:10:44.205 "strip_size_kb": 64, 00:10:44.205 "state": "online", 00:10:44.205 "raid_level": "concat", 00:10:44.205 "superblock": true, 00:10:44.205 "num_base_bdevs": 3, 00:10:44.205 "num_base_bdevs_discovered": 3, 00:10:44.205 "num_base_bdevs_operational": 3, 00:10:44.205 "base_bdevs_list": [ 00:10:44.205 { 00:10:44.205 "name": "NewBaseBdev", 00:10:44.205 "uuid": 
"035cc5c5-aab4-4c9e-9e0e-1605ab184e15", 00:10:44.205 "is_configured": true, 00:10:44.205 "data_offset": 2048, 00:10:44.205 "data_size": 63488 00:10:44.205 }, 00:10:44.205 { 00:10:44.205 "name": "BaseBdev2", 00:10:44.205 "uuid": "7ce67c95-4e88-41ae-be87-6fa6df40ce5e", 00:10:44.205 "is_configured": true, 00:10:44.205 "data_offset": 2048, 00:10:44.205 "data_size": 63488 00:10:44.205 }, 00:10:44.205 { 00:10:44.205 "name": "BaseBdev3", 00:10:44.205 "uuid": "538da96c-2b75-4e79-8b85-f19d320d9dd0", 00:10:44.205 "is_configured": true, 00:10:44.205 "data_offset": 2048, 00:10:44.205 "data_size": 63488 00:10:44.205 } 00:10:44.205 ] 00:10:44.205 }' 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.205 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:44.774 [2024-11-20 11:20:27.657850] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:44.774 "name": "Existed_Raid", 00:10:44.774 "aliases": [ 00:10:44.774 "584ab98d-08fa-461b-9152-255e8cf9c22c" 00:10:44.774 ], 00:10:44.774 "product_name": "Raid Volume", 00:10:44.774 "block_size": 512, 00:10:44.774 "num_blocks": 190464, 00:10:44.774 "uuid": "584ab98d-08fa-461b-9152-255e8cf9c22c", 00:10:44.774 "assigned_rate_limits": { 00:10:44.774 "rw_ios_per_sec": 0, 00:10:44.774 "rw_mbytes_per_sec": 0, 00:10:44.774 "r_mbytes_per_sec": 0, 00:10:44.774 "w_mbytes_per_sec": 0 00:10:44.774 }, 00:10:44.774 "claimed": false, 00:10:44.774 "zoned": false, 00:10:44.774 "supported_io_types": { 00:10:44.774 "read": true, 00:10:44.774 "write": true, 00:10:44.774 "unmap": true, 00:10:44.774 "flush": true, 00:10:44.774 "reset": true, 00:10:44.774 "nvme_admin": false, 00:10:44.774 "nvme_io": false, 00:10:44.774 "nvme_io_md": false, 00:10:44.774 "write_zeroes": true, 00:10:44.774 "zcopy": false, 00:10:44.774 "get_zone_info": false, 00:10:44.774 "zone_management": false, 00:10:44.774 "zone_append": false, 00:10:44.774 "compare": false, 00:10:44.774 "compare_and_write": false, 00:10:44.774 "abort": false, 00:10:44.774 "seek_hole": false, 00:10:44.774 "seek_data": false, 00:10:44.774 "copy": false, 00:10:44.774 "nvme_iov_md": false 00:10:44.774 }, 00:10:44.774 "memory_domains": [ 00:10:44.774 { 00:10:44.774 "dma_device_id": "system", 00:10:44.774 "dma_device_type": 1 00:10:44.774 }, 00:10:44.774 { 00:10:44.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.774 "dma_device_type": 2 00:10:44.774 }, 00:10:44.774 { 00:10:44.774 "dma_device_id": "system", 00:10:44.774 "dma_device_type": 1 00:10:44.774 }, 00:10:44.774 { 00:10:44.774 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.774 "dma_device_type": 2 00:10:44.774 }, 00:10:44.774 { 00:10:44.774 "dma_device_id": "system", 00:10:44.774 "dma_device_type": 1 00:10:44.774 }, 00:10:44.774 { 00:10:44.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.774 "dma_device_type": 2 00:10:44.774 } 00:10:44.774 ], 00:10:44.774 "driver_specific": { 00:10:44.774 "raid": { 00:10:44.774 "uuid": "584ab98d-08fa-461b-9152-255e8cf9c22c", 00:10:44.774 "strip_size_kb": 64, 00:10:44.774 "state": "online", 00:10:44.774 "raid_level": "concat", 00:10:44.774 "superblock": true, 00:10:44.774 "num_base_bdevs": 3, 00:10:44.774 "num_base_bdevs_discovered": 3, 00:10:44.774 "num_base_bdevs_operational": 3, 00:10:44.774 "base_bdevs_list": [ 00:10:44.774 { 00:10:44.774 "name": "NewBaseBdev", 00:10:44.774 "uuid": "035cc5c5-aab4-4c9e-9e0e-1605ab184e15", 00:10:44.774 "is_configured": true, 00:10:44.774 "data_offset": 2048, 00:10:44.774 "data_size": 63488 00:10:44.774 }, 00:10:44.774 { 00:10:44.774 "name": "BaseBdev2", 00:10:44.774 "uuid": "7ce67c95-4e88-41ae-be87-6fa6df40ce5e", 00:10:44.774 "is_configured": true, 00:10:44.774 "data_offset": 2048, 00:10:44.774 "data_size": 63488 00:10:44.774 }, 00:10:44.774 { 00:10:44.774 "name": "BaseBdev3", 00:10:44.774 "uuid": "538da96c-2b75-4e79-8b85-f19d320d9dd0", 00:10:44.774 "is_configured": true, 00:10:44.774 "data_offset": 2048, 00:10:44.774 "data_size": 63488 00:10:44.774 } 00:10:44.774 ] 00:10:44.774 } 00:10:44.774 } 00:10:44.774 }' 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:44.774 BaseBdev2 00:10:44.774 BaseBdev3' 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.774 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:44.775 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.775 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.775 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.775 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.775 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.775 11:20:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.775 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.775 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.775 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:44.775 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.775 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.034 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.034 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.035 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.035 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:45.035 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.035 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.035 [2024-11-20 11:20:27.921014] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:45.035 [2024-11-20 11:20:27.921045] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.035 [2024-11-20 11:20:27.921149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.035 [2024-11-20 11:20:27.921208] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.035 [2024-11-20 11:20:27.921220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:10:45.035 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.035 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66361 00:10:45.035 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66361 ']' 00:10:45.035 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66361 00:10:45.035 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:45.035 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.035 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66361 00:10:45.035 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:45.035 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:45.035 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66361' 00:10:45.035 killing process with pid 66361 00:10:45.035 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66361 00:10:45.035 [2024-11-20 11:20:27.969772] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:45.035 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66361 00:10:45.294 [2024-11-20 11:20:28.294507] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:46.673 11:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:46.673 00:10:46.673 real 0m11.012s 00:10:46.673 user 0m17.525s 00:10:46.673 sys 0m1.893s 00:10:46.673 ************************************ 00:10:46.673 END TEST raid_state_function_test_sb 
00:10:46.673 ************************************ 00:10:46.673 11:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.673 11:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.673 11:20:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:10:46.673 11:20:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:46.673 11:20:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.673 11:20:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:46.673 ************************************ 00:10:46.673 START TEST raid_superblock_test 00:10:46.673 ************************************ 00:10:46.673 11:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:10:46.673 11:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:46.673 11:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:46.673 11:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:46.673 11:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:46.673 11:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:46.673 11:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:46.673 11:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:46.673 11:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:46.674 11:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:46.674 11:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:46.674 11:20:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:46.674 11:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:46.674 11:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:46.674 11:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:46.674 11:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:46.674 11:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:46.674 11:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66991 00:10:46.674 11:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:46.674 11:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66991 00:10:46.674 11:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66991 ']' 00:10:46.674 11:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.674 11:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.674 11:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.674 11:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.674 11:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.674 [2024-11-20 11:20:29.658421] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:10:46.674 [2024-11-20 11:20:29.658643] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66991 ] 00:10:46.932 [2024-11-20 11:20:29.832911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.933 [2024-11-20 11:20:29.950236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.190 [2024-11-20 11:20:30.159148] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.190 [2024-11-20 11:20:30.159198] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.449 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.449 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:47.449 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:47.449 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:47.449 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:47.449 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:47.449 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:47.449 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:47.449 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:47.449 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:47.449 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:47.449 
11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.449 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.709 malloc1 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.709 [2024-11-20 11:20:30.578475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:47.709 [2024-11-20 11:20:30.578610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.709 [2024-11-20 11:20:30.578675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:47.709 [2024-11-20 11:20:30.578724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.709 [2024-11-20 11:20:30.581228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.709 [2024-11-20 11:20:30.581325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:47.709 pt1 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.709 malloc2 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.709 [2024-11-20 11:20:30.639440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:47.709 [2024-11-20 11:20:30.639590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.709 [2024-11-20 11:20:30.639636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:47.709 [2024-11-20 11:20:30.639681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.709 [2024-11-20 11:20:30.641957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.709 [2024-11-20 11:20:30.642035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:47.709 
pt2 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.709 malloc3 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.709 [2024-11-20 11:20:30.719126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:47.709 [2024-11-20 11:20:30.719183] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.709 [2024-11-20 11:20:30.719205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:47.709 [2024-11-20 11:20:30.719216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.709 [2024-11-20 11:20:30.721453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.709 [2024-11-20 11:20:30.721517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:47.709 pt3 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.709 [2024-11-20 11:20:30.731147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:47.709 [2024-11-20 11:20:30.733103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:47.709 [2024-11-20 11:20:30.733171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:47.709 [2024-11-20 11:20:30.733341] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:47.709 [2024-11-20 11:20:30.733355] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:47.709 [2024-11-20 11:20:30.733649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
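Editor's note, not part of the trace: the `raid_bdev_configure_cont` record above reports `blockcnt 190464, blocklen 512`. That figure can be reproduced from the parameters visible elsewhere in this trace — three base bdevs from `bdev_malloc_create 32 512` (32 MiB each, 512-byte blocks), concatenated, with the on-disk superblock reserving a 2048-block `data_offset` per base bdev (the `data_size: 63488` values in the JSON dumps below). A quick sanity check under those assumptions:

```shell
# Sketch: derive the concat raid's block count from the trace's parameters.
blocklen=512        # from "bdev_malloc_create 32 512"
malloc_mib=32
num_base_bdevs=3
data_offset=2048    # blocks reserved per base bdev for the superblock

blocks_per_malloc=$(( malloc_mib * 1024 * 1024 / blocklen ))  # 65536
data_size=$(( blocks_per_malloc - data_offset ))              # 63488
concat_blockcnt=$(( num_base_bdevs * data_size ))

echo "$concat_blockcnt"  # 190464
```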
00:10:47.709 [2024-11-20 11:20:30.733854] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:47.709 [2024-11-20 11:20:30.733872] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:47.709 [2024-11-20 11:20:30.734048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.709 11:20:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.709 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.709 "name": "raid_bdev1", 00:10:47.709 "uuid": "dce5a427-a835-434e-972e-c8245bb5fdfa", 00:10:47.709 "strip_size_kb": 64, 00:10:47.709 "state": "online", 00:10:47.709 "raid_level": "concat", 00:10:47.709 "superblock": true, 00:10:47.709 "num_base_bdevs": 3, 00:10:47.709 "num_base_bdevs_discovered": 3, 00:10:47.710 "num_base_bdevs_operational": 3, 00:10:47.710 "base_bdevs_list": [ 00:10:47.710 { 00:10:47.710 "name": "pt1", 00:10:47.710 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:47.710 "is_configured": true, 00:10:47.710 "data_offset": 2048, 00:10:47.710 "data_size": 63488 00:10:47.710 }, 00:10:47.710 { 00:10:47.710 "name": "pt2", 00:10:47.710 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:47.710 "is_configured": true, 00:10:47.710 "data_offset": 2048, 00:10:47.710 "data_size": 63488 00:10:47.710 }, 00:10:47.710 { 00:10:47.710 "name": "pt3", 00:10:47.710 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:47.710 "is_configured": true, 00:10:47.710 "data_offset": 2048, 00:10:47.710 "data_size": 63488 00:10:47.710 } 00:10:47.710 ] 00:10:47.710 }' 00:10:47.710 11:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.710 11:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.277 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:48.277 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:48.277 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:48.277 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:48.277 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:48.277 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:48.277 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:48.277 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.277 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.277 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:48.277 [2024-11-20 11:20:31.182761] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.277 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.277 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:48.278 "name": "raid_bdev1", 00:10:48.278 "aliases": [ 00:10:48.278 "dce5a427-a835-434e-972e-c8245bb5fdfa" 00:10:48.278 ], 00:10:48.278 "product_name": "Raid Volume", 00:10:48.278 "block_size": 512, 00:10:48.278 "num_blocks": 190464, 00:10:48.278 "uuid": "dce5a427-a835-434e-972e-c8245bb5fdfa", 00:10:48.278 "assigned_rate_limits": { 00:10:48.278 "rw_ios_per_sec": 0, 00:10:48.278 "rw_mbytes_per_sec": 0, 00:10:48.278 "r_mbytes_per_sec": 0, 00:10:48.278 "w_mbytes_per_sec": 0 00:10:48.278 }, 00:10:48.278 "claimed": false, 00:10:48.278 "zoned": false, 00:10:48.278 "supported_io_types": { 00:10:48.278 "read": true, 00:10:48.278 "write": true, 00:10:48.278 "unmap": true, 00:10:48.278 "flush": true, 00:10:48.278 "reset": true, 00:10:48.278 "nvme_admin": false, 00:10:48.278 "nvme_io": false, 00:10:48.278 "nvme_io_md": false, 00:10:48.278 "write_zeroes": true, 00:10:48.278 "zcopy": false, 00:10:48.278 "get_zone_info": false, 00:10:48.278 "zone_management": false, 00:10:48.278 "zone_append": false, 00:10:48.278 "compare": 
false, 00:10:48.278 "compare_and_write": false, 00:10:48.278 "abort": false, 00:10:48.278 "seek_hole": false, 00:10:48.278 "seek_data": false, 00:10:48.278 "copy": false, 00:10:48.278 "nvme_iov_md": false 00:10:48.278 }, 00:10:48.278 "memory_domains": [ 00:10:48.278 { 00:10:48.278 "dma_device_id": "system", 00:10:48.278 "dma_device_type": 1 00:10:48.278 }, 00:10:48.278 { 00:10:48.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.278 "dma_device_type": 2 00:10:48.278 }, 00:10:48.278 { 00:10:48.278 "dma_device_id": "system", 00:10:48.278 "dma_device_type": 1 00:10:48.278 }, 00:10:48.278 { 00:10:48.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.278 "dma_device_type": 2 00:10:48.278 }, 00:10:48.278 { 00:10:48.278 "dma_device_id": "system", 00:10:48.278 "dma_device_type": 1 00:10:48.278 }, 00:10:48.278 { 00:10:48.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.278 "dma_device_type": 2 00:10:48.278 } 00:10:48.278 ], 00:10:48.278 "driver_specific": { 00:10:48.278 "raid": { 00:10:48.278 "uuid": "dce5a427-a835-434e-972e-c8245bb5fdfa", 00:10:48.278 "strip_size_kb": 64, 00:10:48.278 "state": "online", 00:10:48.278 "raid_level": "concat", 00:10:48.278 "superblock": true, 00:10:48.278 "num_base_bdevs": 3, 00:10:48.278 "num_base_bdevs_discovered": 3, 00:10:48.278 "num_base_bdevs_operational": 3, 00:10:48.278 "base_bdevs_list": [ 00:10:48.278 { 00:10:48.278 "name": "pt1", 00:10:48.278 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:48.278 "is_configured": true, 00:10:48.278 "data_offset": 2048, 00:10:48.278 "data_size": 63488 00:10:48.278 }, 00:10:48.278 { 00:10:48.278 "name": "pt2", 00:10:48.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.278 "is_configured": true, 00:10:48.278 "data_offset": 2048, 00:10:48.278 "data_size": 63488 00:10:48.278 }, 00:10:48.278 { 00:10:48.278 "name": "pt3", 00:10:48.278 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:48.278 "is_configured": true, 00:10:48.278 "data_offset": 2048, 00:10:48.278 
"data_size": 63488 00:10:48.278 } 00:10:48.278 ] 00:10:48.278 } 00:10:48.278 } 00:10:48.278 }' 00:10:48.278 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:48.278 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:48.278 pt2 00:10:48.278 pt3' 00:10:48.278 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.278 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:48.278 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.278 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:48.278 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.278 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.278 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.278 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.278 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.278 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.278 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.278 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:48.278 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.278 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
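Editor's note, not part of the trace: the `[[ 512 == \5\1\2\ \ \ ]]` comparisons in this stretch look odd in raw xtrace form — the escaped pattern is simply "512" followed by three spaces. jq's `join(" ")` renders null fields as empty strings, so for a plain 512-byte bdev whose `md_size`, `md_interleave`, and `dif_type` are (apparently) null, both sides of the comparison reduce to that trailing-space string. A minimal stand-alone re-creation, with the values assumed from the trace:

```shell
# Reproduce the '512   ' strings compared by bdev_raid.sh@193 above.
# jq's join(" ") over [512, null, null, null] yields "512" plus three
# separator spaces, since null elements render as empty strings.
cmp_raid_bdev='512   '  # block_size, then empty md_size/md_interleave/dif_type
cmp_base_bdev='512   '
[[ $cmp_raid_bdev == "$cmp_base_bdev" ]] && echo match  # prints: match
```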
00:10:48.278 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.278 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.538 [2024-11-20 11:20:31.466227] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=dce5a427-a835-434e-972e-c8245bb5fdfa 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z dce5a427-a835-434e-972e-c8245bb5fdfa ']' 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.538 [2024-11-20 11:20:31.513853] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:48.538 [2024-11-20 11:20:31.513891] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:48.538 [2024-11-20 11:20:31.513993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.538 [2024-11-20 11:20:31.514065] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:48.538 [2024-11-20 11:20:31.514075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 
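Editor's note, not part of the trace: after the three `bdev_passthru_delete` calls above, the script verifies cleanup — the jq filter `[.[] | select(.product_name == "passthru")] | any` evaluates to `false` on an empty selection, so the `'[' false == true ']'` test that follows falls through. The same check, sketched in plain bash over a hypothetical post-cleanup bdev list (the names and product strings are assumptions for illustration, not taken from this trace):

```shell
# Hypothetical bdev listing after pt1/pt2/pt3 are deleted; only the
# backing malloc bdevs would remain.
declare -A product_name=(
    [malloc1]="Malloc disk"
    [malloc2]="Malloc disk"
    [malloc3]="Malloc disk"
)
found=false
for bdev in "${!product_name[@]}"; do
    # Mirror jq's select(.product_name == "passthru") | any
    [[ ${product_name[$bdev]} == "passthru" ]] && found=true
done
[ "$found" == true ] || echo "no passthru bdevs remain"
```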
00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.538 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.538 [2024-11-20 11:20:31.649620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:48.538 [2024-11-20 11:20:31.651584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:48.799 
[2024-11-20 11:20:31.651710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:48.799 [2024-11-20 11:20:31.651775] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:48.799 [2024-11-20 11:20:31.651833] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:48.799 [2024-11-20 11:20:31.651858] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:48.799 [2024-11-20 11:20:31.651879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:48.799 [2024-11-20 11:20:31.651890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:48.799 request: 00:10:48.799 { 00:10:48.799 "name": "raid_bdev1", 00:10:48.799 "raid_level": "concat", 00:10:48.799 "base_bdevs": [ 00:10:48.799 "malloc1", 00:10:48.799 "malloc2", 00:10:48.799 "malloc3" 00:10:48.799 ], 00:10:48.799 "strip_size_kb": 64, 00:10:48.799 "superblock": false, 00:10:48.799 "method": "bdev_raid_create", 00:10:48.799 "req_id": 1 00:10:48.799 } 00:10:48.799 Got JSON-RPC error response 00:10:48.799 response: 00:10:48.799 { 00:10:48.799 "code": -17, 00:10:48.799 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:48.799 } 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:48.799 11:20:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.799 [2024-11-20 11:20:31.717458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:48.799 [2024-11-20 11:20:31.717587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.799 [2024-11-20 11:20:31.717639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:48.799 [2024-11-20 11:20:31.717693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.799 [2024-11-20 11:20:31.720081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.799 [2024-11-20 11:20:31.720159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:48.799 [2024-11-20 11:20:31.720307] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:48.799 [2024-11-20 11:20:31.720402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:10:48.799 pt1 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.799 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.800 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.800 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.800 "name": "raid_bdev1", 00:10:48.800 "uuid": 
"dce5a427-a835-434e-972e-c8245bb5fdfa", 00:10:48.800 "strip_size_kb": 64, 00:10:48.800 "state": "configuring", 00:10:48.800 "raid_level": "concat", 00:10:48.800 "superblock": true, 00:10:48.800 "num_base_bdevs": 3, 00:10:48.800 "num_base_bdevs_discovered": 1, 00:10:48.800 "num_base_bdevs_operational": 3, 00:10:48.800 "base_bdevs_list": [ 00:10:48.800 { 00:10:48.800 "name": "pt1", 00:10:48.800 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:48.800 "is_configured": true, 00:10:48.800 "data_offset": 2048, 00:10:48.800 "data_size": 63488 00:10:48.800 }, 00:10:48.800 { 00:10:48.800 "name": null, 00:10:48.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.800 "is_configured": false, 00:10:48.800 "data_offset": 2048, 00:10:48.800 "data_size": 63488 00:10:48.800 }, 00:10:48.800 { 00:10:48.800 "name": null, 00:10:48.800 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:48.800 "is_configured": false, 00:10:48.800 "data_offset": 2048, 00:10:48.800 "data_size": 63488 00:10:48.800 } 00:10:48.800 ] 00:10:48.800 }' 00:10:48.800 11:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.800 11:20:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.060 [2024-11-20 11:20:32.140777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:49.060 [2024-11-20 11:20:32.140846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.060 [2024-11-20 11:20:32.140873] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:49.060 [2024-11-20 11:20:32.140884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.060 [2024-11-20 11:20:32.141398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.060 [2024-11-20 11:20:32.141432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:49.060 [2024-11-20 11:20:32.141552] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:49.060 [2024-11-20 11:20:32.141584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:49.060 pt2 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.060 [2024-11-20 11:20:32.148767] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.060 11:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.320 11:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.320 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.320 "name": "raid_bdev1", 00:10:49.320 "uuid": "dce5a427-a835-434e-972e-c8245bb5fdfa", 00:10:49.320 "strip_size_kb": 64, 00:10:49.320 "state": "configuring", 00:10:49.320 "raid_level": "concat", 00:10:49.320 "superblock": true, 00:10:49.320 "num_base_bdevs": 3, 00:10:49.320 "num_base_bdevs_discovered": 1, 00:10:49.320 "num_base_bdevs_operational": 3, 00:10:49.320 "base_bdevs_list": [ 00:10:49.320 { 00:10:49.320 "name": "pt1", 00:10:49.320 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:49.320 "is_configured": true, 00:10:49.320 "data_offset": 2048, 00:10:49.320 "data_size": 63488 00:10:49.320 }, 00:10:49.320 { 00:10:49.320 "name": null, 00:10:49.320 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.320 "is_configured": false, 00:10:49.320 "data_offset": 0, 00:10:49.320 "data_size": 63488 00:10:49.320 }, 00:10:49.320 { 00:10:49.320 "name": null, 00:10:49.320 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:49.320 "is_configured": false, 00:10:49.320 "data_offset": 2048, 00:10:49.320 "data_size": 63488 00:10:49.320 } 00:10:49.320 ] 00:10:49.320 }' 00:10:49.320 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.320 11:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.580 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:49.580 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:49.580 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:49.580 11:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.580 11:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.580 [2024-11-20 11:20:32.619949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:49.580 [2024-11-20 11:20:32.620098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.580 [2024-11-20 11:20:32.620138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:49.580 [2024-11-20 11:20:32.620175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.580 [2024-11-20 11:20:32.620705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.580 [2024-11-20 11:20:32.620774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:49.580 [2024-11-20 11:20:32.620896] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:49.580 [2024-11-20 11:20:32.620953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:49.580 pt2 00:10:49.580 11:20:32 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.580 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:49.580 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:49.580 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:49.580 11:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.580 11:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.580 [2024-11-20 11:20:32.631880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:49.580 [2024-11-20 11:20:32.631926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.580 [2024-11-20 11:20:32.631941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:49.580 [2024-11-20 11:20:32.631950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.580 [2024-11-20 11:20:32.632348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.580 [2024-11-20 11:20:32.632370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:49.580 [2024-11-20 11:20:32.632438] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:49.581 [2024-11-20 11:20:32.632486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:49.581 [2024-11-20 11:20:32.632628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:49.581 [2024-11-20 11:20:32.632641] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:49.581 [2024-11-20 11:20:32.632909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:49.581 [2024-11-20 
11:20:32.633070] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:49.581 [2024-11-20 11:20:32.633079] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:49.581 [2024-11-20 11:20:32.633230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.581 pt3 00:10:49.581 11:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.581 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:49.581 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:49.581 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:49.581 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.581 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.581 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.581 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.581 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.581 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.581 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.581 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.581 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.581 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.581 11:20:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.581 11:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.581 11:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.581 11:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.581 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.581 "name": "raid_bdev1", 00:10:49.581 "uuid": "dce5a427-a835-434e-972e-c8245bb5fdfa", 00:10:49.581 "strip_size_kb": 64, 00:10:49.581 "state": "online", 00:10:49.581 "raid_level": "concat", 00:10:49.581 "superblock": true, 00:10:49.581 "num_base_bdevs": 3, 00:10:49.581 "num_base_bdevs_discovered": 3, 00:10:49.581 "num_base_bdevs_operational": 3, 00:10:49.581 "base_bdevs_list": [ 00:10:49.581 { 00:10:49.581 "name": "pt1", 00:10:49.581 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:49.581 "is_configured": true, 00:10:49.581 "data_offset": 2048, 00:10:49.581 "data_size": 63488 00:10:49.581 }, 00:10:49.581 { 00:10:49.581 "name": "pt2", 00:10:49.581 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.581 "is_configured": true, 00:10:49.581 "data_offset": 2048, 00:10:49.581 "data_size": 63488 00:10:49.581 }, 00:10:49.581 { 00:10:49.581 "name": "pt3", 00:10:49.581 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:49.581 "is_configured": true, 00:10:49.581 "data_offset": 2048, 00:10:49.581 "data_size": 63488 00:10:49.581 } 00:10:49.581 ] 00:10:49.581 }' 00:10:49.581 11:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.581 11:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:50.151 
11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.151 [2024-11-20 11:20:33.087554] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:50.151 "name": "raid_bdev1", 00:10:50.151 "aliases": [ 00:10:50.151 "dce5a427-a835-434e-972e-c8245bb5fdfa" 00:10:50.151 ], 00:10:50.151 "product_name": "Raid Volume", 00:10:50.151 "block_size": 512, 00:10:50.151 "num_blocks": 190464, 00:10:50.151 "uuid": "dce5a427-a835-434e-972e-c8245bb5fdfa", 00:10:50.151 "assigned_rate_limits": { 00:10:50.151 "rw_ios_per_sec": 0, 00:10:50.151 "rw_mbytes_per_sec": 0, 00:10:50.151 "r_mbytes_per_sec": 0, 00:10:50.151 "w_mbytes_per_sec": 0 00:10:50.151 }, 00:10:50.151 "claimed": false, 00:10:50.151 "zoned": false, 00:10:50.151 "supported_io_types": { 00:10:50.151 "read": true, 00:10:50.151 "write": true, 00:10:50.151 "unmap": true, 00:10:50.151 "flush": true, 00:10:50.151 "reset": true, 00:10:50.151 "nvme_admin": false, 00:10:50.151 "nvme_io": false, 00:10:50.151 "nvme_io_md": false, 00:10:50.151 
"write_zeroes": true, 00:10:50.151 "zcopy": false, 00:10:50.151 "get_zone_info": false, 00:10:50.151 "zone_management": false, 00:10:50.151 "zone_append": false, 00:10:50.151 "compare": false, 00:10:50.151 "compare_and_write": false, 00:10:50.151 "abort": false, 00:10:50.151 "seek_hole": false, 00:10:50.151 "seek_data": false, 00:10:50.151 "copy": false, 00:10:50.151 "nvme_iov_md": false 00:10:50.151 }, 00:10:50.151 "memory_domains": [ 00:10:50.151 { 00:10:50.151 "dma_device_id": "system", 00:10:50.151 "dma_device_type": 1 00:10:50.151 }, 00:10:50.151 { 00:10:50.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.151 "dma_device_type": 2 00:10:50.151 }, 00:10:50.151 { 00:10:50.151 "dma_device_id": "system", 00:10:50.151 "dma_device_type": 1 00:10:50.151 }, 00:10:50.151 { 00:10:50.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.151 "dma_device_type": 2 00:10:50.151 }, 00:10:50.151 { 00:10:50.151 "dma_device_id": "system", 00:10:50.151 "dma_device_type": 1 00:10:50.151 }, 00:10:50.151 { 00:10:50.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.151 "dma_device_type": 2 00:10:50.151 } 00:10:50.151 ], 00:10:50.151 "driver_specific": { 00:10:50.151 "raid": { 00:10:50.151 "uuid": "dce5a427-a835-434e-972e-c8245bb5fdfa", 00:10:50.151 "strip_size_kb": 64, 00:10:50.151 "state": "online", 00:10:50.151 "raid_level": "concat", 00:10:50.151 "superblock": true, 00:10:50.151 "num_base_bdevs": 3, 00:10:50.151 "num_base_bdevs_discovered": 3, 00:10:50.151 "num_base_bdevs_operational": 3, 00:10:50.151 "base_bdevs_list": [ 00:10:50.151 { 00:10:50.151 "name": "pt1", 00:10:50.151 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.151 "is_configured": true, 00:10:50.151 "data_offset": 2048, 00:10:50.151 "data_size": 63488 00:10:50.151 }, 00:10:50.151 { 00:10:50.151 "name": "pt2", 00:10:50.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.151 "is_configured": true, 00:10:50.151 "data_offset": 2048, 00:10:50.151 "data_size": 63488 00:10:50.151 }, 00:10:50.151 
{ 00:10:50.151 "name": "pt3", 00:10:50.151 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.151 "is_configured": true, 00:10:50.151 "data_offset": 2048, 00:10:50.151 "data_size": 63488 00:10:50.151 } 00:10:50.151 ] 00:10:50.151 } 00:10:50.151 } 00:10:50.151 }' 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:50.151 pt2 00:10:50.151 pt3' 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.151 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.410 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:50.410 11:20:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.410 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.410 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.411 
[2024-11-20 11:20:33.371058] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' dce5a427-a835-434e-972e-c8245bb5fdfa '!=' dce5a427-a835-434e-972e-c8245bb5fdfa ']' 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66991 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66991 ']' 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66991 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66991 00:10:50.411 killing process with pid 66991 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66991' 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66991 00:10:50.411 [2024-11-20 11:20:33.441094] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:50.411 [2024-11-20 11:20:33.441200] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.411 11:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66991 00:10:50.411 [2024-11-20 11:20:33.441267] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.411 [2024-11-20 11:20:33.441280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:50.670 [2024-11-20 11:20:33.754468] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:52.050 11:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:52.050 00:10:52.050 real 0m5.344s 00:10:52.050 user 0m7.717s 00:10:52.050 sys 0m0.866s 00:10:52.050 11:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.050 11:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.050 ************************************ 00:10:52.050 END TEST raid_superblock_test 00:10:52.050 ************************************ 00:10:52.050 11:20:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:10:52.050 11:20:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:52.050 11:20:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.050 11:20:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:52.050 ************************************ 00:10:52.050 START TEST raid_read_error_test 00:10:52.050 ************************************ 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:52.050 11:20:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IerWa4HW6F 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67243 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67243 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67243 ']' 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.050 11:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.050 11:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.050 11:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.050 11:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.050 [2024-11-20 11:20:35.082232] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:10:52.050 [2024-11-20 11:20:35.082358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67243 ] 00:10:52.309 [2024-11-20 11:20:35.259723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.310 [2024-11-20 11:20:35.374474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.569 [2024-11-20 11:20:35.578327] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.569 [2024-11-20 11:20:35.578510] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.138 11:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.138 11:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:53.138 11:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:53.138 11:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:53.138 11:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.138 11:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.138 BaseBdev1_malloc 00:10:53.138 11:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.138 11:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:53.138 11:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.138 11:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.138 true 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.138 [2024-11-20 11:20:36.016392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:53.138 [2024-11-20 11:20:36.016465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.138 [2024-11-20 11:20:36.016488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:53.138 [2024-11-20 11:20:36.016500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.138 [2024-11-20 11:20:36.018589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.138 [2024-11-20 11:20:36.018671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:53.138 BaseBdev1 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.138 BaseBdev2_malloc 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.138 true 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.138 [2024-11-20 11:20:36.088733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:53.138 [2024-11-20 11:20:36.088794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.138 [2024-11-20 11:20:36.088816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:53.138 [2024-11-20 11:20:36.088827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.138 [2024-11-20 11:20:36.091230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.138 [2024-11-20 11:20:36.091332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:53.138 BaseBdev2 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.138 BaseBdev3_malloc 00:10:53.138 11:20:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.138 true 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.138 [2024-11-20 11:20:36.172997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:53.138 [2024-11-20 11:20:36.173096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.138 [2024-11-20 11:20:36.173134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:53.138 [2024-11-20 11:20:36.173175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.138 [2024-11-20 11:20:36.175513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.138 [2024-11-20 11:20:36.175593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:53.138 BaseBdev3 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.138 [2024-11-20 11:20:36.185093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.138 [2024-11-20 11:20:36.187069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:53.138 [2024-11-20 11:20:36.187233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.138 [2024-11-20 11:20:36.187490] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:53.138 [2024-11-20 11:20:36.187503] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:53.138 [2024-11-20 11:20:36.187832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:53.138 [2024-11-20 11:20:36.188016] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:53.138 [2024-11-20 11:20:36.188030] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:53.138 [2024-11-20 11:20:36.188237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.138 11:20:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.138 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.138 "name": "raid_bdev1", 00:10:53.138 "uuid": "d6f3ef0d-89b7-4c78-9494-9b19adfe810f", 00:10:53.138 "strip_size_kb": 64, 00:10:53.138 "state": "online", 00:10:53.138 "raid_level": "concat", 00:10:53.138 "superblock": true, 00:10:53.138 "num_base_bdevs": 3, 00:10:53.138 "num_base_bdevs_discovered": 3, 00:10:53.138 "num_base_bdevs_operational": 3, 00:10:53.138 "base_bdevs_list": [ 00:10:53.138 { 00:10:53.139 "name": "BaseBdev1", 00:10:53.139 "uuid": "681590ed-dab5-52b7-bd61-3ed53adc4fd8", 00:10:53.139 "is_configured": true, 00:10:53.139 "data_offset": 2048, 00:10:53.139 "data_size": 63488 00:10:53.139 }, 00:10:53.139 { 00:10:53.139 "name": "BaseBdev2", 00:10:53.139 "uuid": "650f3e55-ce73-526e-b759-0f9abfacaa70", 00:10:53.139 "is_configured": true, 00:10:53.139 "data_offset": 2048, 00:10:53.139 "data_size": 63488 
00:10:53.139 }, 00:10:53.139 { 00:10:53.139 "name": "BaseBdev3", 00:10:53.139 "uuid": "5c731d14-05cd-5454-be8b-f5b1c79461a4", 00:10:53.139 "is_configured": true, 00:10:53.139 "data_offset": 2048, 00:10:53.139 "data_size": 63488 00:10:53.139 } 00:10:53.139 ] 00:10:53.139 }' 00:10:53.139 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.139 11:20:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.708 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:53.708 11:20:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:53.708 [2024-11-20 11:20:36.725761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.646 "name": "raid_bdev1", 00:10:54.646 "uuid": "d6f3ef0d-89b7-4c78-9494-9b19adfe810f", 00:10:54.646 "strip_size_kb": 64, 00:10:54.646 "state": "online", 00:10:54.646 "raid_level": "concat", 00:10:54.646 "superblock": true, 00:10:54.646 "num_base_bdevs": 3, 00:10:54.646 "num_base_bdevs_discovered": 3, 00:10:54.646 "num_base_bdevs_operational": 3, 00:10:54.646 "base_bdevs_list": [ 00:10:54.646 { 00:10:54.646 "name": "BaseBdev1", 00:10:54.646 "uuid": "681590ed-dab5-52b7-bd61-3ed53adc4fd8", 00:10:54.646 "is_configured": true, 00:10:54.646 "data_offset": 2048, 00:10:54.646 "data_size": 63488 
00:10:54.646 }, 00:10:54.646 { 00:10:54.646 "name": "BaseBdev2", 00:10:54.646 "uuid": "650f3e55-ce73-526e-b759-0f9abfacaa70", 00:10:54.646 "is_configured": true, 00:10:54.646 "data_offset": 2048, 00:10:54.646 "data_size": 63488 00:10:54.646 }, 00:10:54.646 { 00:10:54.646 "name": "BaseBdev3", 00:10:54.646 "uuid": "5c731d14-05cd-5454-be8b-f5b1c79461a4", 00:10:54.646 "is_configured": true, 00:10:54.646 "data_offset": 2048, 00:10:54.646 "data_size": 63488 00:10:54.646 } 00:10:54.646 ] 00:10:54.646 }' 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.646 11:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.906 11:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:54.906 11:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.906 11:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.906 [2024-11-20 11:20:37.993591] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:54.906 [2024-11-20 11:20:37.993696] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:54.906 [2024-11-20 11:20:37.996361] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.906 [2024-11-20 11:20:37.996459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.906 [2024-11-20 11:20:37.996520] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.906 [2024-11-20 11:20:37.996585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:54.906 { 00:10:54.906 "results": [ 00:10:54.906 { 00:10:54.906 "job": "raid_bdev1", 00:10:54.906 "core_mask": "0x1", 00:10:54.906 "workload": "randrw", 00:10:54.906 "percentage": 50, 
00:10:54.906 "status": "finished", 00:10:54.906 "queue_depth": 1, 00:10:54.906 "io_size": 131072, 00:10:54.906 "runtime": 1.268287, 00:10:54.906 "iops": 14897.259058872321, 00:10:54.906 "mibps": 1862.1573823590402, 00:10:54.906 "io_failed": 1, 00:10:54.906 "io_timeout": 0, 00:10:54.906 "avg_latency_us": 93.21234429292656, 00:10:54.906 "min_latency_us": 27.388646288209607, 00:10:54.906 "max_latency_us": 1595.4724890829693 00:10:54.906 } 00:10:54.906 ], 00:10:54.906 "core_count": 1 00:10:54.906 } 00:10:54.906 11:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.906 11:20:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67243 00:10:54.906 11:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67243 ']' 00:10:54.906 11:20:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67243 00:10:54.906 11:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:54.906 11:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.906 11:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67243 00:10:55.166 11:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:55.166 11:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:55.166 killing process with pid 67243 00:10:55.166 11:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67243' 00:10:55.166 11:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67243 00:10:55.166 [2024-11-20 11:20:38.037221] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:55.166 11:20:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67243 00:10:55.166 [2024-11-20 
11:20:38.272269] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:56.582 11:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IerWa4HW6F 00:10:56.582 11:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:56.582 11:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:56.582 11:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79 00:10:56.582 11:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:56.582 11:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:56.582 11:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:56.582 ************************************ 00:10:56.582 END TEST raid_read_error_test 00:10:56.582 ************************************ 00:10:56.582 11:20:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]] 00:10:56.582 00:10:56.582 real 0m4.497s 00:10:56.582 user 0m5.272s 00:10:56.582 sys 0m0.569s 00:10:56.582 11:20:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.582 11:20:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.582 11:20:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:10:56.582 11:20:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:56.582 11:20:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.582 11:20:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:56.582 ************************************ 00:10:56.583 START TEST raid_write_error_test 00:10:56.583 ************************************ 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:10:56.583 11:20:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:56.583 11:20:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5nuH3FlyKq 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67386 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67386 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67386 ']' 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.583 11:20:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.583 [2024-11-20 11:20:39.655876] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:10:56.583 [2024-11-20 11:20:39.656070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67386 ] 00:10:56.842 [2024-11-20 11:20:39.808845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.842 [2024-11-20 11:20:39.925258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.101 [2024-11-20 11:20:40.128224] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.101 [2024-11-20 11:20:40.128383] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.670 BaseBdev1_malloc 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.670 true 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.670 [2024-11-20 11:20:40.590724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:57.670 [2024-11-20 11:20:40.590877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.670 [2024-11-20 11:20:40.590907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:57.670 [2024-11-20 11:20:40.590919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.670 [2024-11-20 11:20:40.593122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.670 [2024-11-20 11:20:40.593165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:57.670 BaseBdev1 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:57.670 BaseBdev2_malloc 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.670 true 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.670 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.670 [2024-11-20 11:20:40.658057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:57.671 [2024-11-20 11:20:40.658123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.671 [2024-11-20 11:20:40.658141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:57.671 [2024-11-20 11:20:40.658151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.671 [2024-11-20 11:20:40.660463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.671 [2024-11-20 11:20:40.660521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:57.671 BaseBdev2 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.671 11:20:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.671 BaseBdev3_malloc 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.671 true 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.671 [2024-11-20 11:20:40.736563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:57.671 [2024-11-20 11:20:40.736620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.671 [2024-11-20 11:20:40.736640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:57.671 [2024-11-20 11:20:40.736651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.671 [2024-11-20 11:20:40.738774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.671 [2024-11-20 11:20:40.738898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:57.671 BaseBdev3 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.671 [2024-11-20 11:20:40.748626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:57.671 [2024-11-20 11:20:40.750513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:57.671 [2024-11-20 11:20:40.750597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.671 [2024-11-20 11:20:40.750810] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:57.671 [2024-11-20 11:20:40.750827] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:57.671 [2024-11-20 11:20:40.751125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:57.671 [2024-11-20 11:20:40.751305] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:57.671 [2024-11-20 11:20:40.751319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:57.671 [2024-11-20 11:20:40.751514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.671 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.931 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.931 "name": "raid_bdev1", 00:10:57.931 "uuid": "7031e71a-c95f-4675-a29e-c257a58755ea", 00:10:57.931 "strip_size_kb": 64, 00:10:57.931 "state": "online", 00:10:57.931 "raid_level": "concat", 00:10:57.931 "superblock": true, 00:10:57.931 "num_base_bdevs": 3, 00:10:57.931 "num_base_bdevs_discovered": 3, 00:10:57.931 "num_base_bdevs_operational": 3, 00:10:57.931 "base_bdevs_list": [ 00:10:57.931 { 00:10:57.931 
"name": "BaseBdev1", 00:10:57.931 "uuid": "fcd5855f-1985-5cf4-b03f-cef85dca8db4", 00:10:57.931 "is_configured": true, 00:10:57.931 "data_offset": 2048, 00:10:57.931 "data_size": 63488 00:10:57.931 }, 00:10:57.931 { 00:10:57.931 "name": "BaseBdev2", 00:10:57.931 "uuid": "b1462fd9-3b41-5142-be58-241b469153fc", 00:10:57.931 "is_configured": true, 00:10:57.931 "data_offset": 2048, 00:10:57.931 "data_size": 63488 00:10:57.931 }, 00:10:57.931 { 00:10:57.931 "name": "BaseBdev3", 00:10:57.931 "uuid": "e4183a44-e1e1-53ac-8b23-acf3bac05a06", 00:10:57.931 "is_configured": true, 00:10:57.931 "data_offset": 2048, 00:10:57.931 "data_size": 63488 00:10:57.931 } 00:10:57.931 ] 00:10:57.931 }' 00:10:57.931 11:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.931 11:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.221 11:20:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:58.221 11:20:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:58.221 [2024-11-20 11:20:41.277272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.165 "name": "raid_bdev1", 00:10:59.165 "uuid": "7031e71a-c95f-4675-a29e-c257a58755ea", 00:10:59.165 "strip_size_kb": 64, 00:10:59.165 "state": "online", 
00:10:59.165 "raid_level": "concat", 00:10:59.165 "superblock": true, 00:10:59.165 "num_base_bdevs": 3, 00:10:59.165 "num_base_bdevs_discovered": 3, 00:10:59.165 "num_base_bdevs_operational": 3, 00:10:59.165 "base_bdevs_list": [ 00:10:59.165 { 00:10:59.165 "name": "BaseBdev1", 00:10:59.165 "uuid": "fcd5855f-1985-5cf4-b03f-cef85dca8db4", 00:10:59.165 "is_configured": true, 00:10:59.165 "data_offset": 2048, 00:10:59.165 "data_size": 63488 00:10:59.165 }, 00:10:59.165 { 00:10:59.165 "name": "BaseBdev2", 00:10:59.165 "uuid": "b1462fd9-3b41-5142-be58-241b469153fc", 00:10:59.165 "is_configured": true, 00:10:59.165 "data_offset": 2048, 00:10:59.165 "data_size": 63488 00:10:59.165 }, 00:10:59.165 { 00:10:59.165 "name": "BaseBdev3", 00:10:59.165 "uuid": "e4183a44-e1e1-53ac-8b23-acf3bac05a06", 00:10:59.165 "is_configured": true, 00:10:59.165 "data_offset": 2048, 00:10:59.165 "data_size": 63488 00:10:59.165 } 00:10:59.165 ] 00:10:59.165 }' 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.165 11:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.735 11:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:59.735 11:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.735 11:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.735 [2024-11-20 11:20:42.662555] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:59.735 [2024-11-20 11:20:42.662590] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:59.735 [2024-11-20 11:20:42.665771] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.735 [2024-11-20 11:20:42.665857] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.735 [2024-11-20 11:20:42.665933] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.735 [2024-11-20 11:20:42.665988] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:59.735 { 00:10:59.735 "results": [ 00:10:59.735 { 00:10:59.735 "job": "raid_bdev1", 00:10:59.735 "core_mask": "0x1", 00:10:59.735 "workload": "randrw", 00:10:59.735 "percentage": 50, 00:10:59.735 "status": "finished", 00:10:59.735 "queue_depth": 1, 00:10:59.735 "io_size": 131072, 00:10:59.735 "runtime": 1.386063, 00:10:59.735 "iops": 13841.362189164562, 00:10:59.735 "mibps": 1730.1702736455702, 00:10:59.735 "io_failed": 1, 00:10:59.735 "io_timeout": 0, 00:10:59.735 "avg_latency_us": 100.23220388593029, 00:10:59.735 "min_latency_us": 27.83580786026201, 00:10:59.735 "max_latency_us": 1824.419213973799 00:10:59.735 } 00:10:59.735 ], 00:10:59.735 "core_count": 1 00:10:59.735 } 00:10:59.735 11:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.735 11:20:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67386 00:10:59.735 11:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67386 ']' 00:10:59.735 11:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67386 00:10:59.735 11:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:59.735 11:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.735 11:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67386 00:10:59.735 11:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:59.735 11:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:59.735 11:20:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67386' 00:10:59.735 killing process with pid 67386 00:10:59.735 11:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67386 00:10:59.735 [2024-11-20 11:20:42.698768] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:59.735 11:20:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67386 00:10:59.994 [2024-11-20 11:20:42.944652] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:01.375 11:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5nuH3FlyKq 00:11:01.375 11:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:01.375 11:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:01.375 11:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:01.375 11:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:01.375 11:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:01.375 11:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:01.375 11:20:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:01.375 00:11:01.375 real 0m4.672s 00:11:01.375 user 0m5.556s 00:11:01.375 sys 0m0.521s 00:11:01.375 11:20:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.375 11:20:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.375 ************************************ 00:11:01.375 END TEST raid_write_error_test 00:11:01.375 ************************************ 00:11:01.375 11:20:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:01.375 11:20:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:11:01.375 11:20:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:01.375 11:20:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.375 11:20:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:01.375 ************************************ 00:11:01.375 START TEST raid_state_function_test 00:11:01.375 ************************************ 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67530 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:01.375 Process raid pid: 67530 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67530' 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67530 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67530 ']' 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.375 11:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.375 [2024-11-20 11:20:44.383969] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:11:01.375 [2024-11-20 11:20:44.384188] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.635 [2024-11-20 11:20:44.561874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.635 [2024-11-20 11:20:44.691353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.894 [2024-11-20 11:20:44.930188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.894 [2024-11-20 11:20:44.930313] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.464 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.464 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:02.464 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:02.464 11:20:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.464 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.464 [2024-11-20 11:20:45.304874] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:02.464 [2024-11-20 11:20:45.305037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:02.464 [2024-11-20 11:20:45.305075] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:02.464 [2024-11-20 11:20:45.305103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:02.464 [2024-11-20 11:20:45.305113] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:02.464 [2024-11-20 11:20:45.305123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:02.464 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.464 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:02.464 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.464 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.464 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.464 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.464 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:02.464 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.465 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.465 
11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.465 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.465 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.465 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.465 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.465 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.465 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.465 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.465 "name": "Existed_Raid", 00:11:02.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.465 "strip_size_kb": 0, 00:11:02.465 "state": "configuring", 00:11:02.465 "raid_level": "raid1", 00:11:02.465 "superblock": false, 00:11:02.465 "num_base_bdevs": 3, 00:11:02.465 "num_base_bdevs_discovered": 0, 00:11:02.465 "num_base_bdevs_operational": 3, 00:11:02.465 "base_bdevs_list": [ 00:11:02.465 { 00:11:02.465 "name": "BaseBdev1", 00:11:02.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.465 "is_configured": false, 00:11:02.465 "data_offset": 0, 00:11:02.465 "data_size": 0 00:11:02.465 }, 00:11:02.465 { 00:11:02.465 "name": "BaseBdev2", 00:11:02.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.465 "is_configured": false, 00:11:02.465 "data_offset": 0, 00:11:02.465 "data_size": 0 00:11:02.465 }, 00:11:02.465 { 00:11:02.465 "name": "BaseBdev3", 00:11:02.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.465 "is_configured": false, 00:11:02.465 "data_offset": 0, 00:11:02.465 "data_size": 0 00:11:02.465 } 00:11:02.465 ] 00:11:02.465 }' 00:11:02.465 11:20:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.465 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.724 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.725 [2024-11-20 11:20:45.756074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:02.725 [2024-11-20 11:20:45.756186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.725 [2024-11-20 11:20:45.768051] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:02.725 [2024-11-20 11:20:45.768159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:02.725 [2024-11-20 11:20:45.768198] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:02.725 [2024-11-20 11:20:45.768227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:02.725 [2024-11-20 11:20:45.768271] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:02.725 [2024-11-20 11:20:45.768298] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.725 [2024-11-20 11:20:45.819493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:02.725 BaseBdev1 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.725 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.985 [ 00:11:02.985 { 00:11:02.985 "name": "BaseBdev1", 00:11:02.985 "aliases": [ 00:11:02.985 "fcb6eb3b-c73a-4d68-b2a0-299e051f7608" 00:11:02.985 ], 00:11:02.985 "product_name": "Malloc disk", 00:11:02.985 "block_size": 512, 00:11:02.985 "num_blocks": 65536, 00:11:02.985 "uuid": "fcb6eb3b-c73a-4d68-b2a0-299e051f7608", 00:11:02.985 "assigned_rate_limits": { 00:11:02.985 "rw_ios_per_sec": 0, 00:11:02.985 "rw_mbytes_per_sec": 0, 00:11:02.985 "r_mbytes_per_sec": 0, 00:11:02.985 "w_mbytes_per_sec": 0 00:11:02.985 }, 00:11:02.985 "claimed": true, 00:11:02.985 "claim_type": "exclusive_write", 00:11:02.985 "zoned": false, 00:11:02.985 "supported_io_types": { 00:11:02.985 "read": true, 00:11:02.985 "write": true, 00:11:02.985 "unmap": true, 00:11:02.985 "flush": true, 00:11:02.985 "reset": true, 00:11:02.985 "nvme_admin": false, 00:11:02.985 "nvme_io": false, 00:11:02.985 "nvme_io_md": false, 00:11:02.985 "write_zeroes": true, 00:11:02.985 "zcopy": true, 00:11:02.985 "get_zone_info": false, 00:11:02.985 "zone_management": false, 00:11:02.985 "zone_append": false, 00:11:02.985 "compare": false, 00:11:02.985 "compare_and_write": false, 00:11:02.985 "abort": true, 00:11:02.985 "seek_hole": false, 00:11:02.985 "seek_data": false, 00:11:02.985 "copy": true, 00:11:02.985 "nvme_iov_md": false 00:11:02.985 }, 00:11:02.985 "memory_domains": [ 00:11:02.985 { 00:11:02.985 "dma_device_id": "system", 00:11:02.985 "dma_device_type": 1 00:11:02.985 }, 00:11:02.985 { 00:11:02.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.985 "dma_device_type": 2 00:11:02.985 } 00:11:02.985 ], 00:11:02.985 "driver_specific": {} 00:11:02.985 } 00:11:02.985 ] 00:11:02.985 11:20:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.985 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:02.985 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:02.985 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.985 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.985 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.985 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.985 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:02.985 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.985 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.985 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.985 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.985 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.985 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.985 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.985 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.985 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.985 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:02.985 "name": "Existed_Raid", 00:11:02.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.985 "strip_size_kb": 0, 00:11:02.985 "state": "configuring", 00:11:02.985 "raid_level": "raid1", 00:11:02.985 "superblock": false, 00:11:02.985 "num_base_bdevs": 3, 00:11:02.985 "num_base_bdevs_discovered": 1, 00:11:02.985 "num_base_bdevs_operational": 3, 00:11:02.985 "base_bdevs_list": [ 00:11:02.985 { 00:11:02.985 "name": "BaseBdev1", 00:11:02.985 "uuid": "fcb6eb3b-c73a-4d68-b2a0-299e051f7608", 00:11:02.985 "is_configured": true, 00:11:02.985 "data_offset": 0, 00:11:02.985 "data_size": 65536 00:11:02.985 }, 00:11:02.985 { 00:11:02.985 "name": "BaseBdev2", 00:11:02.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.985 "is_configured": false, 00:11:02.985 "data_offset": 0, 00:11:02.985 "data_size": 0 00:11:02.985 }, 00:11:02.985 { 00:11:02.985 "name": "BaseBdev3", 00:11:02.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.985 "is_configured": false, 00:11:02.985 "data_offset": 0, 00:11:02.985 "data_size": 0 00:11:02.985 } 00:11:02.985 ] 00:11:02.985 }' 00:11:02.985 11:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.985 11:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.301 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:03.301 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.301 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.301 [2024-11-20 11:20:46.306720] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.301 [2024-11-20 11:20:46.306788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:03.301 11:20:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.301 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:03.301 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.301 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.301 [2024-11-20 11:20:46.318806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.301 [2024-11-20 11:20:46.320907] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.301 [2024-11-20 11:20:46.320960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.301 [2024-11-20 11:20:46.320971] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:03.301 [2024-11-20 11:20:46.320982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:03.301 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.301 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:03.301 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.301 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:03.301 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.301 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.301 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.301 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:03.301 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:03.302 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.302 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.302 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.302 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.302 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.302 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.302 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.302 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.302 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.302 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.302 "name": "Existed_Raid", 00:11:03.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.302 "strip_size_kb": 0, 00:11:03.302 "state": "configuring", 00:11:03.302 "raid_level": "raid1", 00:11:03.302 "superblock": false, 00:11:03.302 "num_base_bdevs": 3, 00:11:03.302 "num_base_bdevs_discovered": 1, 00:11:03.302 "num_base_bdevs_operational": 3, 00:11:03.302 "base_bdevs_list": [ 00:11:03.302 { 00:11:03.302 "name": "BaseBdev1", 00:11:03.302 "uuid": "fcb6eb3b-c73a-4d68-b2a0-299e051f7608", 00:11:03.302 "is_configured": true, 00:11:03.302 "data_offset": 0, 00:11:03.302 "data_size": 65536 00:11:03.302 }, 00:11:03.302 { 00:11:03.302 "name": "BaseBdev2", 00:11:03.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.302 
"is_configured": false, 00:11:03.302 "data_offset": 0, 00:11:03.302 "data_size": 0 00:11:03.302 }, 00:11:03.302 { 00:11:03.302 "name": "BaseBdev3", 00:11:03.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.302 "is_configured": false, 00:11:03.302 "data_offset": 0, 00:11:03.302 "data_size": 0 00:11:03.302 } 00:11:03.302 ] 00:11:03.302 }' 00:11:03.302 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.302 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.872 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:03.872 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.872 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.872 [2024-11-20 11:20:46.844769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:03.872 BaseBdev2 00:11:03.872 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.872 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:03.872 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:03.872 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:03.872 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:03.872 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:03.872 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:03.872 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:03.872 11:20:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.872 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.872 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.872 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:03.872 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.872 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.872 [ 00:11:03.872 { 00:11:03.872 "name": "BaseBdev2", 00:11:03.872 "aliases": [ 00:11:03.872 "4018a642-8ee6-4300-a16d-4122e7a087b1" 00:11:03.872 ], 00:11:03.872 "product_name": "Malloc disk", 00:11:03.872 "block_size": 512, 00:11:03.872 "num_blocks": 65536, 00:11:03.872 "uuid": "4018a642-8ee6-4300-a16d-4122e7a087b1", 00:11:03.872 "assigned_rate_limits": { 00:11:03.872 "rw_ios_per_sec": 0, 00:11:03.872 "rw_mbytes_per_sec": 0, 00:11:03.872 "r_mbytes_per_sec": 0, 00:11:03.872 "w_mbytes_per_sec": 0 00:11:03.872 }, 00:11:03.872 "claimed": true, 00:11:03.872 "claim_type": "exclusive_write", 00:11:03.872 "zoned": false, 00:11:03.872 "supported_io_types": { 00:11:03.872 "read": true, 00:11:03.872 "write": true, 00:11:03.872 "unmap": true, 00:11:03.872 "flush": true, 00:11:03.872 "reset": true, 00:11:03.872 "nvme_admin": false, 00:11:03.872 "nvme_io": false, 00:11:03.872 "nvme_io_md": false, 00:11:03.872 "write_zeroes": true, 00:11:03.872 "zcopy": true, 00:11:03.872 "get_zone_info": false, 00:11:03.872 "zone_management": false, 00:11:03.872 "zone_append": false, 00:11:03.872 "compare": false, 00:11:03.872 "compare_and_write": false, 00:11:03.872 "abort": true, 00:11:03.872 "seek_hole": false, 00:11:03.872 "seek_data": false, 00:11:03.872 "copy": true, 00:11:03.872 "nvme_iov_md": false 00:11:03.872 }, 00:11:03.873 
"memory_domains": [ 00:11:03.873 { 00:11:03.873 "dma_device_id": "system", 00:11:03.873 "dma_device_type": 1 00:11:03.873 }, 00:11:03.873 { 00:11:03.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.873 "dma_device_type": 2 00:11:03.873 } 00:11:03.873 ], 00:11:03.873 "driver_specific": {} 00:11:03.873 } 00:11:03.873 ] 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.873 "name": "Existed_Raid", 00:11:03.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.873 "strip_size_kb": 0, 00:11:03.873 "state": "configuring", 00:11:03.873 "raid_level": "raid1", 00:11:03.873 "superblock": false, 00:11:03.873 "num_base_bdevs": 3, 00:11:03.873 "num_base_bdevs_discovered": 2, 00:11:03.873 "num_base_bdevs_operational": 3, 00:11:03.873 "base_bdevs_list": [ 00:11:03.873 { 00:11:03.873 "name": "BaseBdev1", 00:11:03.873 "uuid": "fcb6eb3b-c73a-4d68-b2a0-299e051f7608", 00:11:03.873 "is_configured": true, 00:11:03.873 "data_offset": 0, 00:11:03.873 "data_size": 65536 00:11:03.873 }, 00:11:03.873 { 00:11:03.873 "name": "BaseBdev2", 00:11:03.873 "uuid": "4018a642-8ee6-4300-a16d-4122e7a087b1", 00:11:03.873 "is_configured": true, 00:11:03.873 "data_offset": 0, 00:11:03.873 "data_size": 65536 00:11:03.873 }, 00:11:03.873 { 00:11:03.873 "name": "BaseBdev3", 00:11:03.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.873 "is_configured": false, 00:11:03.873 "data_offset": 0, 00:11:03.873 "data_size": 0 00:11:03.873 } 00:11:03.873 ] 00:11:03.873 }' 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.873 11:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.442 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:11:04.442 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.442 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.442 [2024-11-20 11:20:47.398008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.442 [2024-11-20 11:20:47.398067] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:04.442 [2024-11-20 11:20:47.398210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:04.442 [2024-11-20 11:20:47.398540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:04.442 [2024-11-20 11:20:47.398748] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:04.442 [2024-11-20 11:20:47.398760] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:04.442 [2024-11-20 11:20:47.399036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.442 BaseBdev3 00:11:04.442 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.442 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:04.442 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:04.442 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.442 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:04.442 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.442 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.442 11:20:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.442 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.442 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.442 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.442 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:04.442 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.442 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.442 [ 00:11:04.442 { 00:11:04.442 "name": "BaseBdev3", 00:11:04.442 "aliases": [ 00:11:04.442 "c2bee0d4-f57f-4638-a5fb-299136d10ac9" 00:11:04.442 ], 00:11:04.442 "product_name": "Malloc disk", 00:11:04.442 "block_size": 512, 00:11:04.442 "num_blocks": 65536, 00:11:04.442 "uuid": "c2bee0d4-f57f-4638-a5fb-299136d10ac9", 00:11:04.442 "assigned_rate_limits": { 00:11:04.442 "rw_ios_per_sec": 0, 00:11:04.442 "rw_mbytes_per_sec": 0, 00:11:04.442 "r_mbytes_per_sec": 0, 00:11:04.442 "w_mbytes_per_sec": 0 00:11:04.442 }, 00:11:04.442 "claimed": true, 00:11:04.442 "claim_type": "exclusive_write", 00:11:04.442 "zoned": false, 00:11:04.442 "supported_io_types": { 00:11:04.442 "read": true, 00:11:04.442 "write": true, 00:11:04.442 "unmap": true, 00:11:04.442 "flush": true, 00:11:04.442 "reset": true, 00:11:04.442 "nvme_admin": false, 00:11:04.442 "nvme_io": false, 00:11:04.442 "nvme_io_md": false, 00:11:04.442 "write_zeroes": true, 00:11:04.442 "zcopy": true, 00:11:04.442 "get_zone_info": false, 00:11:04.442 "zone_management": false, 00:11:04.442 "zone_append": false, 00:11:04.442 "compare": false, 00:11:04.442 "compare_and_write": false, 00:11:04.442 "abort": true, 00:11:04.442 "seek_hole": false, 00:11:04.442 "seek_data": false, 00:11:04.442 
"copy": true, 00:11:04.442 "nvme_iov_md": false 00:11:04.442 }, 00:11:04.442 "memory_domains": [ 00:11:04.442 { 00:11:04.442 "dma_device_id": "system", 00:11:04.442 "dma_device_type": 1 00:11:04.442 }, 00:11:04.442 { 00:11:04.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.442 "dma_device_type": 2 00:11:04.442 } 00:11:04.442 ], 00:11:04.442 "driver_specific": {} 00:11:04.442 } 00:11:04.442 ] 00:11:04.442 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.443 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:04.443 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:04.443 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.443 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:04.443 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.443 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.443 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.443 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.443 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:04.443 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.443 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.443 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.443 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.443 11:20:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.443 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.443 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.443 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.443 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.443 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.443 "name": "Existed_Raid", 00:11:04.443 "uuid": "5e4b4644-90e6-4be9-bd01-acf2801f7a1e", 00:11:04.443 "strip_size_kb": 0, 00:11:04.443 "state": "online", 00:11:04.443 "raid_level": "raid1", 00:11:04.443 "superblock": false, 00:11:04.443 "num_base_bdevs": 3, 00:11:04.443 "num_base_bdevs_discovered": 3, 00:11:04.443 "num_base_bdevs_operational": 3, 00:11:04.443 "base_bdevs_list": [ 00:11:04.443 { 00:11:04.443 "name": "BaseBdev1", 00:11:04.443 "uuid": "fcb6eb3b-c73a-4d68-b2a0-299e051f7608", 00:11:04.443 "is_configured": true, 00:11:04.443 "data_offset": 0, 00:11:04.443 "data_size": 65536 00:11:04.443 }, 00:11:04.443 { 00:11:04.443 "name": "BaseBdev2", 00:11:04.443 "uuid": "4018a642-8ee6-4300-a16d-4122e7a087b1", 00:11:04.443 "is_configured": true, 00:11:04.443 "data_offset": 0, 00:11:04.443 "data_size": 65536 00:11:04.443 }, 00:11:04.443 { 00:11:04.443 "name": "BaseBdev3", 00:11:04.443 "uuid": "c2bee0d4-f57f-4638-a5fb-299136d10ac9", 00:11:04.443 "is_configured": true, 00:11:04.443 "data_offset": 0, 00:11:04.443 "data_size": 65536 00:11:04.443 } 00:11:04.443 ] 00:11:04.443 }' 00:11:04.443 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.443 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.012 11:20:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:05.012 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:05.012 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:05.012 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:05.012 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:05.012 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:05.012 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:05.012 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:05.012 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.012 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.012 [2024-11-20 11:20:47.937517] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.012 11:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.012 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:05.012 "name": "Existed_Raid", 00:11:05.012 "aliases": [ 00:11:05.012 "5e4b4644-90e6-4be9-bd01-acf2801f7a1e" 00:11:05.012 ], 00:11:05.012 "product_name": "Raid Volume", 00:11:05.012 "block_size": 512, 00:11:05.012 "num_blocks": 65536, 00:11:05.012 "uuid": "5e4b4644-90e6-4be9-bd01-acf2801f7a1e", 00:11:05.012 "assigned_rate_limits": { 00:11:05.012 "rw_ios_per_sec": 0, 00:11:05.012 "rw_mbytes_per_sec": 0, 00:11:05.012 "r_mbytes_per_sec": 0, 00:11:05.012 "w_mbytes_per_sec": 0 00:11:05.012 }, 00:11:05.012 "claimed": false, 00:11:05.012 "zoned": false, 
00:11:05.012 "supported_io_types": { 00:11:05.012 "read": true, 00:11:05.012 "write": true, 00:11:05.012 "unmap": false, 00:11:05.012 "flush": false, 00:11:05.012 "reset": true, 00:11:05.012 "nvme_admin": false, 00:11:05.012 "nvme_io": false, 00:11:05.012 "nvme_io_md": false, 00:11:05.012 "write_zeroes": true, 00:11:05.012 "zcopy": false, 00:11:05.012 "get_zone_info": false, 00:11:05.012 "zone_management": false, 00:11:05.012 "zone_append": false, 00:11:05.012 "compare": false, 00:11:05.012 "compare_and_write": false, 00:11:05.012 "abort": false, 00:11:05.012 "seek_hole": false, 00:11:05.012 "seek_data": false, 00:11:05.012 "copy": false, 00:11:05.012 "nvme_iov_md": false 00:11:05.012 }, 00:11:05.012 "memory_domains": [ 00:11:05.012 { 00:11:05.012 "dma_device_id": "system", 00:11:05.012 "dma_device_type": 1 00:11:05.012 }, 00:11:05.012 { 00:11:05.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.012 "dma_device_type": 2 00:11:05.012 }, 00:11:05.012 { 00:11:05.012 "dma_device_id": "system", 00:11:05.012 "dma_device_type": 1 00:11:05.012 }, 00:11:05.012 { 00:11:05.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.012 "dma_device_type": 2 00:11:05.012 }, 00:11:05.012 { 00:11:05.012 "dma_device_id": "system", 00:11:05.012 "dma_device_type": 1 00:11:05.012 }, 00:11:05.012 { 00:11:05.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.012 "dma_device_type": 2 00:11:05.012 } 00:11:05.012 ], 00:11:05.012 "driver_specific": { 00:11:05.012 "raid": { 00:11:05.012 "uuid": "5e4b4644-90e6-4be9-bd01-acf2801f7a1e", 00:11:05.012 "strip_size_kb": 0, 00:11:05.012 "state": "online", 00:11:05.012 "raid_level": "raid1", 00:11:05.012 "superblock": false, 00:11:05.012 "num_base_bdevs": 3, 00:11:05.012 "num_base_bdevs_discovered": 3, 00:11:05.012 "num_base_bdevs_operational": 3, 00:11:05.012 "base_bdevs_list": [ 00:11:05.012 { 00:11:05.012 "name": "BaseBdev1", 00:11:05.012 "uuid": "fcb6eb3b-c73a-4d68-b2a0-299e051f7608", 00:11:05.012 "is_configured": true, 00:11:05.012 
"data_offset": 0, 00:11:05.012 "data_size": 65536 00:11:05.012 }, 00:11:05.012 { 00:11:05.012 "name": "BaseBdev2", 00:11:05.012 "uuid": "4018a642-8ee6-4300-a16d-4122e7a087b1", 00:11:05.012 "is_configured": true, 00:11:05.012 "data_offset": 0, 00:11:05.012 "data_size": 65536 00:11:05.012 }, 00:11:05.012 { 00:11:05.012 "name": "BaseBdev3", 00:11:05.012 "uuid": "c2bee0d4-f57f-4638-a5fb-299136d10ac9", 00:11:05.012 "is_configured": true, 00:11:05.012 "data_offset": 0, 00:11:05.012 "data_size": 65536 00:11:05.012 } 00:11:05.012 ] 00:11:05.012 } 00:11:05.012 } 00:11:05.012 }' 00:11:05.012 11:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.012 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:05.012 BaseBdev2 00:11:05.012 BaseBdev3' 00:11:05.012 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.012 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:05.012 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.012 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:05.012 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.012 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.012 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.012 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.272 [2024-11-20 11:20:48.232741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.272 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.532 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.532 "name": "Existed_Raid", 00:11:05.532 "uuid": "5e4b4644-90e6-4be9-bd01-acf2801f7a1e", 00:11:05.532 "strip_size_kb": 0, 00:11:05.532 "state": "online", 00:11:05.532 "raid_level": "raid1", 00:11:05.532 "superblock": false, 00:11:05.532 "num_base_bdevs": 3, 00:11:05.532 "num_base_bdevs_discovered": 2, 00:11:05.532 "num_base_bdevs_operational": 2, 00:11:05.532 "base_bdevs_list": [ 00:11:05.532 { 00:11:05.532 "name": null, 00:11:05.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.532 "is_configured": false, 00:11:05.532 "data_offset": 0, 00:11:05.532 "data_size": 65536 00:11:05.532 }, 00:11:05.532 { 00:11:05.532 "name": "BaseBdev2", 00:11:05.532 "uuid": "4018a642-8ee6-4300-a16d-4122e7a087b1", 00:11:05.532 "is_configured": true, 00:11:05.532 "data_offset": 0, 00:11:05.532 "data_size": 65536 00:11:05.532 }, 00:11:05.532 { 00:11:05.532 "name": "BaseBdev3", 00:11:05.532 "uuid": "c2bee0d4-f57f-4638-a5fb-299136d10ac9", 00:11:05.532 "is_configured": true, 00:11:05.532 "data_offset": 0, 00:11:05.532 "data_size": 65536 00:11:05.532 } 00:11:05.532 ] 
00:11:05.532 }' 00:11:05.532 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.532 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.792 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:05.792 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:05.792 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.792 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.792 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.792 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:05.792 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.792 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:05.792 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:05.792 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:05.792 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.792 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.792 [2024-11-20 11:20:48.837867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:06.051 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.051 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.051 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.051 11:20:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.051 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.051 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.051 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.051 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.051 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.051 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.051 11:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:06.051 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.051 11:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.051 [2024-11-20 11:20:48.992599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:06.051 [2024-11-20 11:20:48.992773] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.051 [2024-11-20 11:20:49.088013] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.051 [2024-11-20 11:20:49.088166] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.051 [2024-11-20 11:20:49.088210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:06.051 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.051 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.051 11:20:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.051 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.051 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:06.051 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.051 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.051 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.051 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:06.051 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:06.051 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:06.051 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:06.051 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.051 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:06.052 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.052 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.310 BaseBdev2 00:11:06.310 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.310 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:06.310 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:06.310 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.310 
11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:06.310 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.310 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.310 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.310 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.310 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.311 [ 00:11:06.311 { 00:11:06.311 "name": "BaseBdev2", 00:11:06.311 "aliases": [ 00:11:06.311 "5d22dbc3-03ee-47b2-9b68-0ebcca8c7fa5" 00:11:06.311 ], 00:11:06.311 "product_name": "Malloc disk", 00:11:06.311 "block_size": 512, 00:11:06.311 "num_blocks": 65536, 00:11:06.311 "uuid": "5d22dbc3-03ee-47b2-9b68-0ebcca8c7fa5", 00:11:06.311 "assigned_rate_limits": { 00:11:06.311 "rw_ios_per_sec": 0, 00:11:06.311 "rw_mbytes_per_sec": 0, 00:11:06.311 "r_mbytes_per_sec": 0, 00:11:06.311 "w_mbytes_per_sec": 0 00:11:06.311 }, 00:11:06.311 "claimed": false, 00:11:06.311 "zoned": false, 00:11:06.311 "supported_io_types": { 00:11:06.311 "read": true, 00:11:06.311 "write": true, 00:11:06.311 "unmap": true, 00:11:06.311 "flush": true, 00:11:06.311 "reset": true, 00:11:06.311 "nvme_admin": false, 00:11:06.311 "nvme_io": false, 00:11:06.311 "nvme_io_md": false, 00:11:06.311 "write_zeroes": true, 
00:11:06.311 "zcopy": true, 00:11:06.311 "get_zone_info": false, 00:11:06.311 "zone_management": false, 00:11:06.311 "zone_append": false, 00:11:06.311 "compare": false, 00:11:06.311 "compare_and_write": false, 00:11:06.311 "abort": true, 00:11:06.311 "seek_hole": false, 00:11:06.311 "seek_data": false, 00:11:06.311 "copy": true, 00:11:06.311 "nvme_iov_md": false 00:11:06.311 }, 00:11:06.311 "memory_domains": [ 00:11:06.311 { 00:11:06.311 "dma_device_id": "system", 00:11:06.311 "dma_device_type": 1 00:11:06.311 }, 00:11:06.311 { 00:11:06.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.311 "dma_device_type": 2 00:11:06.311 } 00:11:06.311 ], 00:11:06.311 "driver_specific": {} 00:11:06.311 } 00:11:06.311 ] 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.311 BaseBdev3 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.311 11:20:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.311 [ 00:11:06.311 { 00:11:06.311 "name": "BaseBdev3", 00:11:06.311 "aliases": [ 00:11:06.311 "d6301dbd-672f-4d57-baea-bf63ae06960b" 00:11:06.311 ], 00:11:06.311 "product_name": "Malloc disk", 00:11:06.311 "block_size": 512, 00:11:06.311 "num_blocks": 65536, 00:11:06.311 "uuid": "d6301dbd-672f-4d57-baea-bf63ae06960b", 00:11:06.311 "assigned_rate_limits": { 00:11:06.311 "rw_ios_per_sec": 0, 00:11:06.311 "rw_mbytes_per_sec": 0, 00:11:06.311 "r_mbytes_per_sec": 0, 00:11:06.311 "w_mbytes_per_sec": 0 00:11:06.311 }, 00:11:06.311 "claimed": false, 00:11:06.311 "zoned": false, 00:11:06.311 "supported_io_types": { 00:11:06.311 "read": true, 00:11:06.311 "write": true, 00:11:06.311 "unmap": true, 00:11:06.311 "flush": true, 00:11:06.311 "reset": true, 00:11:06.311 "nvme_admin": false, 00:11:06.311 "nvme_io": false, 00:11:06.311 "nvme_io_md": false, 00:11:06.311 "write_zeroes": true, 
00:11:06.311 "zcopy": true, 00:11:06.311 "get_zone_info": false, 00:11:06.311 "zone_management": false, 00:11:06.311 "zone_append": false, 00:11:06.311 "compare": false, 00:11:06.311 "compare_and_write": false, 00:11:06.311 "abort": true, 00:11:06.311 "seek_hole": false, 00:11:06.311 "seek_data": false, 00:11:06.311 "copy": true, 00:11:06.311 "nvme_iov_md": false 00:11:06.311 }, 00:11:06.311 "memory_domains": [ 00:11:06.311 { 00:11:06.311 "dma_device_id": "system", 00:11:06.311 "dma_device_type": 1 00:11:06.311 }, 00:11:06.311 { 00:11:06.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.311 "dma_device_type": 2 00:11:06.311 } 00:11:06.311 ], 00:11:06.311 "driver_specific": {} 00:11:06.311 } 00:11:06.311 ] 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.311 [2024-11-20 11:20:49.311714] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:06.311 [2024-11-20 11:20:49.311816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:06.311 [2024-11-20 11:20:49.311863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:06.311 [2024-11-20 11:20:49.313899] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:06.311 "name": "Existed_Raid", 00:11:06.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.311 "strip_size_kb": 0, 00:11:06.311 "state": "configuring", 00:11:06.311 "raid_level": "raid1", 00:11:06.311 "superblock": false, 00:11:06.311 "num_base_bdevs": 3, 00:11:06.311 "num_base_bdevs_discovered": 2, 00:11:06.311 "num_base_bdevs_operational": 3, 00:11:06.311 "base_bdevs_list": [ 00:11:06.311 { 00:11:06.311 "name": "BaseBdev1", 00:11:06.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.311 "is_configured": false, 00:11:06.311 "data_offset": 0, 00:11:06.311 "data_size": 0 00:11:06.311 }, 00:11:06.311 { 00:11:06.311 "name": "BaseBdev2", 00:11:06.311 "uuid": "5d22dbc3-03ee-47b2-9b68-0ebcca8c7fa5", 00:11:06.311 "is_configured": true, 00:11:06.311 "data_offset": 0, 00:11:06.311 "data_size": 65536 00:11:06.311 }, 00:11:06.311 { 00:11:06.311 "name": "BaseBdev3", 00:11:06.311 "uuid": "d6301dbd-672f-4d57-baea-bf63ae06960b", 00:11:06.311 "is_configured": true, 00:11:06.311 "data_offset": 0, 00:11:06.311 "data_size": 65536 00:11:06.311 } 00:11:06.311 ] 00:11:06.311 }' 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.311 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.880 [2024-11-20 11:20:49.794919] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.880 "name": "Existed_Raid", 00:11:06.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.880 "strip_size_kb": 0, 00:11:06.880 "state": "configuring", 00:11:06.880 "raid_level": "raid1", 00:11:06.880 "superblock": false, 00:11:06.880 "num_base_bdevs": 3, 
00:11:06.880 "num_base_bdevs_discovered": 1, 00:11:06.880 "num_base_bdevs_operational": 3, 00:11:06.880 "base_bdevs_list": [ 00:11:06.880 { 00:11:06.880 "name": "BaseBdev1", 00:11:06.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.880 "is_configured": false, 00:11:06.880 "data_offset": 0, 00:11:06.880 "data_size": 0 00:11:06.880 }, 00:11:06.880 { 00:11:06.880 "name": null, 00:11:06.880 "uuid": "5d22dbc3-03ee-47b2-9b68-0ebcca8c7fa5", 00:11:06.880 "is_configured": false, 00:11:06.880 "data_offset": 0, 00:11:06.880 "data_size": 65536 00:11:06.880 }, 00:11:06.880 { 00:11:06.880 "name": "BaseBdev3", 00:11:06.880 "uuid": "d6301dbd-672f-4d57-baea-bf63ae06960b", 00:11:06.880 "is_configured": true, 00:11:06.880 "data_offset": 0, 00:11:06.880 "data_size": 65536 00:11:06.880 } 00:11:06.880 ] 00:11:06.880 }' 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.880 11:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.138 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.138 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:07.138 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.138 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.138 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.138 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:07.138 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:07.138 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.138 11:20:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.398 [2024-11-20 11:20:50.266969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:07.398 BaseBdev1 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.398 [ 00:11:07.398 { 00:11:07.398 "name": "BaseBdev1", 00:11:07.398 "aliases": [ 00:11:07.398 "62cc6d09-b02e-45e0-8c46-18d90dd07a1e" 00:11:07.398 ], 00:11:07.398 "product_name": "Malloc disk", 
00:11:07.398 "block_size": 512, 00:11:07.398 "num_blocks": 65536, 00:11:07.398 "uuid": "62cc6d09-b02e-45e0-8c46-18d90dd07a1e", 00:11:07.398 "assigned_rate_limits": { 00:11:07.398 "rw_ios_per_sec": 0, 00:11:07.398 "rw_mbytes_per_sec": 0, 00:11:07.398 "r_mbytes_per_sec": 0, 00:11:07.398 "w_mbytes_per_sec": 0 00:11:07.398 }, 00:11:07.398 "claimed": true, 00:11:07.398 "claim_type": "exclusive_write", 00:11:07.398 "zoned": false, 00:11:07.398 "supported_io_types": { 00:11:07.398 "read": true, 00:11:07.398 "write": true, 00:11:07.398 "unmap": true, 00:11:07.398 "flush": true, 00:11:07.398 "reset": true, 00:11:07.398 "nvme_admin": false, 00:11:07.398 "nvme_io": false, 00:11:07.398 "nvme_io_md": false, 00:11:07.398 "write_zeroes": true, 00:11:07.398 "zcopy": true, 00:11:07.398 "get_zone_info": false, 00:11:07.398 "zone_management": false, 00:11:07.398 "zone_append": false, 00:11:07.398 "compare": false, 00:11:07.398 "compare_and_write": false, 00:11:07.398 "abort": true, 00:11:07.398 "seek_hole": false, 00:11:07.398 "seek_data": false, 00:11:07.398 "copy": true, 00:11:07.398 "nvme_iov_md": false 00:11:07.398 }, 00:11:07.398 "memory_domains": [ 00:11:07.398 { 00:11:07.398 "dma_device_id": "system", 00:11:07.398 "dma_device_type": 1 00:11:07.398 }, 00:11:07.398 { 00:11:07.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.398 "dma_device_type": 2 00:11:07.398 } 00:11:07.398 ], 00:11:07.398 "driver_specific": {} 00:11:07.398 } 00:11:07.398 ] 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.398 "name": "Existed_Raid", 00:11:07.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.398 "strip_size_kb": 0, 00:11:07.398 "state": "configuring", 00:11:07.398 "raid_level": "raid1", 00:11:07.398 "superblock": false, 00:11:07.398 "num_base_bdevs": 3, 00:11:07.398 "num_base_bdevs_discovered": 2, 00:11:07.398 "num_base_bdevs_operational": 3, 00:11:07.398 "base_bdevs_list": [ 00:11:07.398 { 00:11:07.398 "name": "BaseBdev1", 00:11:07.398 "uuid": 
"62cc6d09-b02e-45e0-8c46-18d90dd07a1e", 00:11:07.398 "is_configured": true, 00:11:07.398 "data_offset": 0, 00:11:07.398 "data_size": 65536 00:11:07.398 }, 00:11:07.398 { 00:11:07.398 "name": null, 00:11:07.398 "uuid": "5d22dbc3-03ee-47b2-9b68-0ebcca8c7fa5", 00:11:07.398 "is_configured": false, 00:11:07.398 "data_offset": 0, 00:11:07.398 "data_size": 65536 00:11:07.398 }, 00:11:07.398 { 00:11:07.398 "name": "BaseBdev3", 00:11:07.398 "uuid": "d6301dbd-672f-4d57-baea-bf63ae06960b", 00:11:07.398 "is_configured": true, 00:11:07.398 "data_offset": 0, 00:11:07.398 "data_size": 65536 00:11:07.398 } 00:11:07.398 ] 00:11:07.398 }' 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.398 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.658 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.658 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:07.658 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.658 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.658 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.918 [2024-11-20 11:20:50.794151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:07.918 11:20:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.918 "name": "Existed_Raid", 00:11:07.918 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:07.918 "strip_size_kb": 0, 00:11:07.918 "state": "configuring", 00:11:07.918 "raid_level": "raid1", 00:11:07.918 "superblock": false, 00:11:07.918 "num_base_bdevs": 3, 00:11:07.918 "num_base_bdevs_discovered": 1, 00:11:07.918 "num_base_bdevs_operational": 3, 00:11:07.918 "base_bdevs_list": [ 00:11:07.918 { 00:11:07.918 "name": "BaseBdev1", 00:11:07.918 "uuid": "62cc6d09-b02e-45e0-8c46-18d90dd07a1e", 00:11:07.918 "is_configured": true, 00:11:07.918 "data_offset": 0, 00:11:07.918 "data_size": 65536 00:11:07.918 }, 00:11:07.918 { 00:11:07.918 "name": null, 00:11:07.918 "uuid": "5d22dbc3-03ee-47b2-9b68-0ebcca8c7fa5", 00:11:07.918 "is_configured": false, 00:11:07.918 "data_offset": 0, 00:11:07.918 "data_size": 65536 00:11:07.918 }, 00:11:07.918 { 00:11:07.918 "name": null, 00:11:07.918 "uuid": "d6301dbd-672f-4d57-baea-bf63ae06960b", 00:11:07.918 "is_configured": false, 00:11:07.918 "data_offset": 0, 00:11:07.918 "data_size": 65536 00:11:07.918 } 00:11:07.918 ] 00:11:07.918 }' 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.918 11:20:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.178 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:08.178 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.178 11:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.178 11:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.178 11:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.178 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:08.178 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:08.178 11:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.178 11:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.178 [2024-11-20 11:20:51.265379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:08.178 11:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.179 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:08.179 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.179 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.179 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.179 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.179 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.179 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.179 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.179 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.179 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.179 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.179 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.179 11:20:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.179 11:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.439 11:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.439 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.439 "name": "Existed_Raid", 00:11:08.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.439 "strip_size_kb": 0, 00:11:08.439 "state": "configuring", 00:11:08.439 "raid_level": "raid1", 00:11:08.439 "superblock": false, 00:11:08.439 "num_base_bdevs": 3, 00:11:08.439 "num_base_bdevs_discovered": 2, 00:11:08.439 "num_base_bdevs_operational": 3, 00:11:08.439 "base_bdevs_list": [ 00:11:08.439 { 00:11:08.439 "name": "BaseBdev1", 00:11:08.439 "uuid": "62cc6d09-b02e-45e0-8c46-18d90dd07a1e", 00:11:08.439 "is_configured": true, 00:11:08.439 "data_offset": 0, 00:11:08.439 "data_size": 65536 00:11:08.439 }, 00:11:08.440 { 00:11:08.440 "name": null, 00:11:08.440 "uuid": "5d22dbc3-03ee-47b2-9b68-0ebcca8c7fa5", 00:11:08.440 "is_configured": false, 00:11:08.440 "data_offset": 0, 00:11:08.440 "data_size": 65536 00:11:08.440 }, 00:11:08.440 { 00:11:08.440 "name": "BaseBdev3", 00:11:08.440 "uuid": "d6301dbd-672f-4d57-baea-bf63ae06960b", 00:11:08.440 "is_configured": true, 00:11:08.440 "data_offset": 0, 00:11:08.440 "data_size": 65536 00:11:08.440 } 00:11:08.440 ] 00:11:08.440 }' 00:11:08.440 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.440 11:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.746 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.746 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:08.746 11:20:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.746 11:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.746 11:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.746 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:08.746 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:08.746 11:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.746 11:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.746 [2024-11-20 11:20:51.796537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:09.006 11:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.006 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:09.006 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.006 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.006 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.006 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.006 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.006 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.006 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.006 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.006 11:20:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.006 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.006 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.006 11:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.006 11:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.006 11:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.006 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.006 "name": "Existed_Raid", 00:11:09.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.006 "strip_size_kb": 0, 00:11:09.006 "state": "configuring", 00:11:09.006 "raid_level": "raid1", 00:11:09.006 "superblock": false, 00:11:09.006 "num_base_bdevs": 3, 00:11:09.006 "num_base_bdevs_discovered": 1, 00:11:09.006 "num_base_bdevs_operational": 3, 00:11:09.006 "base_bdevs_list": [ 00:11:09.006 { 00:11:09.006 "name": null, 00:11:09.006 "uuid": "62cc6d09-b02e-45e0-8c46-18d90dd07a1e", 00:11:09.006 "is_configured": false, 00:11:09.006 "data_offset": 0, 00:11:09.006 "data_size": 65536 00:11:09.006 }, 00:11:09.006 { 00:11:09.006 "name": null, 00:11:09.006 "uuid": "5d22dbc3-03ee-47b2-9b68-0ebcca8c7fa5", 00:11:09.006 "is_configured": false, 00:11:09.006 "data_offset": 0, 00:11:09.006 "data_size": 65536 00:11:09.006 }, 00:11:09.006 { 00:11:09.006 "name": "BaseBdev3", 00:11:09.006 "uuid": "d6301dbd-672f-4d57-baea-bf63ae06960b", 00:11:09.006 "is_configured": true, 00:11:09.006 "data_offset": 0, 00:11:09.006 "data_size": 65536 00:11:09.006 } 00:11:09.006 ] 00:11:09.006 }' 00:11:09.006 11:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.006 11:20:51 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.266 [2024-11-20 11:20:52.362419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.266 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.526 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.526 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.526 "name": "Existed_Raid", 00:11:09.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.526 "strip_size_kb": 0, 00:11:09.526 "state": "configuring", 00:11:09.526 "raid_level": "raid1", 00:11:09.526 "superblock": false, 00:11:09.526 "num_base_bdevs": 3, 00:11:09.526 "num_base_bdevs_discovered": 2, 00:11:09.526 "num_base_bdevs_operational": 3, 00:11:09.526 "base_bdevs_list": [ 00:11:09.526 { 00:11:09.526 "name": null, 00:11:09.526 "uuid": "62cc6d09-b02e-45e0-8c46-18d90dd07a1e", 00:11:09.526 "is_configured": false, 00:11:09.526 "data_offset": 0, 00:11:09.526 "data_size": 65536 00:11:09.526 }, 00:11:09.526 { 00:11:09.526 "name": "BaseBdev2", 00:11:09.526 "uuid": "5d22dbc3-03ee-47b2-9b68-0ebcca8c7fa5", 00:11:09.526 "is_configured": true, 00:11:09.526 "data_offset": 0, 00:11:09.526 "data_size": 65536 00:11:09.526 }, 00:11:09.526 { 
00:11:09.526 "name": "BaseBdev3", 00:11:09.526 "uuid": "d6301dbd-672f-4d57-baea-bf63ae06960b", 00:11:09.526 "is_configured": true, 00:11:09.526 "data_offset": 0, 00:11:09.526 "data_size": 65536 00:11:09.526 } 00:11:09.526 ] 00:11:09.526 }' 00:11:09.526 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.526 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.785 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:09.785 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.785 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.785 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.785 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.785 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:09.785 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.785 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:09.785 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.785 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.785 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 62cc6d09-b02e-45e0-8c46-18d90dd07a1e 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.046 11:20:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.046 [2024-11-20 11:20:52.945844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:10.046 [2024-11-20 11:20:52.945988] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:10.046 [2024-11-20 11:20:52.946014] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:10.046 [2024-11-20 11:20:52.946282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:10.046 [2024-11-20 11:20:52.946499] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:10.046 [2024-11-20 11:20:52.946547] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:10.046 [2024-11-20 11:20:52.946834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.046 NewBaseBdev 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.046 [ 00:11:10.046 { 00:11:10.046 "name": "NewBaseBdev", 00:11:10.046 "aliases": [ 00:11:10.046 "62cc6d09-b02e-45e0-8c46-18d90dd07a1e" 00:11:10.046 ], 00:11:10.046 "product_name": "Malloc disk", 00:11:10.046 "block_size": 512, 00:11:10.046 "num_blocks": 65536, 00:11:10.046 "uuid": "62cc6d09-b02e-45e0-8c46-18d90dd07a1e", 00:11:10.046 "assigned_rate_limits": { 00:11:10.046 "rw_ios_per_sec": 0, 00:11:10.046 "rw_mbytes_per_sec": 0, 00:11:10.046 "r_mbytes_per_sec": 0, 00:11:10.046 "w_mbytes_per_sec": 0 00:11:10.046 }, 00:11:10.046 "claimed": true, 00:11:10.046 "claim_type": "exclusive_write", 00:11:10.046 "zoned": false, 00:11:10.046 "supported_io_types": { 00:11:10.046 "read": true, 00:11:10.046 "write": true, 00:11:10.046 "unmap": true, 00:11:10.046 "flush": true, 00:11:10.046 "reset": true, 00:11:10.046 "nvme_admin": false, 00:11:10.046 "nvme_io": false, 00:11:10.046 "nvme_io_md": false, 00:11:10.046 "write_zeroes": true, 00:11:10.046 "zcopy": true, 00:11:10.046 "get_zone_info": false, 00:11:10.046 "zone_management": false, 00:11:10.046 "zone_append": false, 00:11:10.046 "compare": false, 00:11:10.046 "compare_and_write": false, 00:11:10.046 "abort": true, 00:11:10.046 "seek_hole": false, 00:11:10.046 "seek_data": false, 00:11:10.046 "copy": true, 00:11:10.046 "nvme_iov_md": false 00:11:10.046 }, 00:11:10.046 "memory_domains": [ 00:11:10.046 { 00:11:10.046 
"dma_device_id": "system", 00:11:10.046 "dma_device_type": 1 00:11:10.046 }, 00:11:10.046 { 00:11:10.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.046 "dma_device_type": 2 00:11:10.046 } 00:11:10.046 ], 00:11:10.046 "driver_specific": {} 00:11:10.046 } 00:11:10.046 ] 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:10.046 11:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.046 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.046 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.046 "name": "Existed_Raid", 00:11:10.046 "uuid": "7cbce0a9-f236-49a1-b292-de2b7bedd87d", 00:11:10.046 "strip_size_kb": 0, 00:11:10.046 "state": "online", 00:11:10.046 "raid_level": "raid1", 00:11:10.046 "superblock": false, 00:11:10.046 "num_base_bdevs": 3, 00:11:10.046 "num_base_bdevs_discovered": 3, 00:11:10.046 "num_base_bdevs_operational": 3, 00:11:10.046 "base_bdevs_list": [ 00:11:10.046 { 00:11:10.046 "name": "NewBaseBdev", 00:11:10.046 "uuid": "62cc6d09-b02e-45e0-8c46-18d90dd07a1e", 00:11:10.046 "is_configured": true, 00:11:10.046 "data_offset": 0, 00:11:10.046 "data_size": 65536 00:11:10.046 }, 00:11:10.046 { 00:11:10.046 "name": "BaseBdev2", 00:11:10.046 "uuid": "5d22dbc3-03ee-47b2-9b68-0ebcca8c7fa5", 00:11:10.046 "is_configured": true, 00:11:10.046 "data_offset": 0, 00:11:10.046 "data_size": 65536 00:11:10.046 }, 00:11:10.046 { 00:11:10.046 "name": "BaseBdev3", 00:11:10.046 "uuid": "d6301dbd-672f-4d57-baea-bf63ae06960b", 00:11:10.046 "is_configured": true, 00:11:10.046 "data_offset": 0, 00:11:10.046 "data_size": 65536 00:11:10.046 } 00:11:10.046 ] 00:11:10.046 }' 00:11:10.046 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.046 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.306 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:10.306 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:10.306 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:10.306 11:20:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:10.306 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:10.306 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:10.306 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:10.306 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:10.306 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.306 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.306 [2024-11-20 11:20:53.401508] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.566 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.566 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:10.566 "name": "Existed_Raid", 00:11:10.566 "aliases": [ 00:11:10.566 "7cbce0a9-f236-49a1-b292-de2b7bedd87d" 00:11:10.566 ], 00:11:10.566 "product_name": "Raid Volume", 00:11:10.566 "block_size": 512, 00:11:10.566 "num_blocks": 65536, 00:11:10.566 "uuid": "7cbce0a9-f236-49a1-b292-de2b7bedd87d", 00:11:10.566 "assigned_rate_limits": { 00:11:10.566 "rw_ios_per_sec": 0, 00:11:10.566 "rw_mbytes_per_sec": 0, 00:11:10.566 "r_mbytes_per_sec": 0, 00:11:10.566 "w_mbytes_per_sec": 0 00:11:10.566 }, 00:11:10.566 "claimed": false, 00:11:10.566 "zoned": false, 00:11:10.566 "supported_io_types": { 00:11:10.566 "read": true, 00:11:10.567 "write": true, 00:11:10.567 "unmap": false, 00:11:10.567 "flush": false, 00:11:10.567 "reset": true, 00:11:10.567 "nvme_admin": false, 00:11:10.567 "nvme_io": false, 00:11:10.567 "nvme_io_md": false, 00:11:10.567 "write_zeroes": true, 00:11:10.567 "zcopy": false, 00:11:10.567 
"get_zone_info": false, 00:11:10.567 "zone_management": false, 00:11:10.567 "zone_append": false, 00:11:10.567 "compare": false, 00:11:10.567 "compare_and_write": false, 00:11:10.567 "abort": false, 00:11:10.567 "seek_hole": false, 00:11:10.567 "seek_data": false, 00:11:10.567 "copy": false, 00:11:10.567 "nvme_iov_md": false 00:11:10.567 }, 00:11:10.567 "memory_domains": [ 00:11:10.567 { 00:11:10.567 "dma_device_id": "system", 00:11:10.567 "dma_device_type": 1 00:11:10.567 }, 00:11:10.567 { 00:11:10.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.567 "dma_device_type": 2 00:11:10.567 }, 00:11:10.567 { 00:11:10.567 "dma_device_id": "system", 00:11:10.567 "dma_device_type": 1 00:11:10.567 }, 00:11:10.567 { 00:11:10.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.567 "dma_device_type": 2 00:11:10.567 }, 00:11:10.567 { 00:11:10.567 "dma_device_id": "system", 00:11:10.567 "dma_device_type": 1 00:11:10.567 }, 00:11:10.567 { 00:11:10.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.567 "dma_device_type": 2 00:11:10.567 } 00:11:10.567 ], 00:11:10.567 "driver_specific": { 00:11:10.567 "raid": { 00:11:10.567 "uuid": "7cbce0a9-f236-49a1-b292-de2b7bedd87d", 00:11:10.567 "strip_size_kb": 0, 00:11:10.567 "state": "online", 00:11:10.567 "raid_level": "raid1", 00:11:10.567 "superblock": false, 00:11:10.567 "num_base_bdevs": 3, 00:11:10.567 "num_base_bdevs_discovered": 3, 00:11:10.567 "num_base_bdevs_operational": 3, 00:11:10.567 "base_bdevs_list": [ 00:11:10.567 { 00:11:10.567 "name": "NewBaseBdev", 00:11:10.567 "uuid": "62cc6d09-b02e-45e0-8c46-18d90dd07a1e", 00:11:10.567 "is_configured": true, 00:11:10.567 "data_offset": 0, 00:11:10.567 "data_size": 65536 00:11:10.567 }, 00:11:10.567 { 00:11:10.567 "name": "BaseBdev2", 00:11:10.567 "uuid": "5d22dbc3-03ee-47b2-9b68-0ebcca8c7fa5", 00:11:10.567 "is_configured": true, 00:11:10.567 "data_offset": 0, 00:11:10.567 "data_size": 65536 00:11:10.567 }, 00:11:10.567 { 00:11:10.567 "name": "BaseBdev3", 00:11:10.567 "uuid": 
"d6301dbd-672f-4d57-baea-bf63ae06960b", 00:11:10.567 "is_configured": true, 00:11:10.567 "data_offset": 0, 00:11:10.567 "data_size": 65536 00:11:10.567 } 00:11:10.567 ] 00:11:10.567 } 00:11:10.567 } 00:11:10.567 }' 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:10.567 BaseBdev2 00:11:10.567 BaseBdev3' 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.567 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.826 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.826 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.826 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:10.826 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.826 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:10.826 [2024-11-20 11:20:53.700641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:10.826 [2024-11-20 11:20:53.700678] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.826 [2024-11-20 11:20:53.700778] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.826 [2024-11-20 11:20:53.701110] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.826 [2024-11-20 11:20:53.701123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:10.826 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.826 11:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67530 00:11:10.826 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67530 ']' 00:11:10.826 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67530 00:11:10.826 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:10.826 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.826 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67530 00:11:10.826 killing process with pid 67530 00:11:10.826 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.826 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.826 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67530' 00:11:10.826 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67530 00:11:10.826 
[2024-11-20 11:20:53.749496] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.826 11:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67530 00:11:11.085 [2024-11-20 11:20:54.057445] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:12.462 ************************************ 00:11:12.462 END TEST raid_state_function_test 00:11:12.462 ************************************ 00:11:12.462 00:11:12.462 real 0m10.908s 00:11:12.462 user 0m17.403s 00:11:12.462 sys 0m1.936s 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.462 11:20:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:12.462 11:20:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:12.462 11:20:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.462 11:20:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:12.462 ************************************ 00:11:12.462 START TEST raid_state_function_test_sb 00:11:12.462 ************************************ 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:12.462 11:20:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:12.462 
11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:12.462 Process raid pid: 68157 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68157 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68157' 00:11:12.462 11:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68157 00:11:12.463 11:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68157 ']' 00:11:12.463 11:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.463 11:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.463 11:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.463 11:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.463 11:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.463 [2024-11-20 11:20:55.350671] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:11:12.463 [2024-11-20 11:20:55.350839] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.463 [2024-11-20 11:20:55.528497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.720 [2024-11-20 11:20:55.645188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.978 [2024-11-20 11:20:55.857002] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.978 [2024-11-20 11:20:55.857069] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.236 [2024-11-20 11:20:56.251975] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.236 [2024-11-20 11:20:56.252047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.236 [2024-11-20 11:20:56.252058] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.236 [2024-11-20 11:20:56.252069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.236 [2024-11-20 11:20:56.252075] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:13.236 [2024-11-20 11:20:56.252084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.236 "name": "Existed_Raid", 00:11:13.236 "uuid": "70f9c90c-535e-40a7-8a43-6937f32e95f0", 00:11:13.236 "strip_size_kb": 0, 00:11:13.236 "state": "configuring", 00:11:13.236 "raid_level": "raid1", 00:11:13.236 "superblock": true, 00:11:13.236 "num_base_bdevs": 3, 00:11:13.236 "num_base_bdevs_discovered": 0, 00:11:13.236 "num_base_bdevs_operational": 3, 00:11:13.236 "base_bdevs_list": [ 00:11:13.236 { 00:11:13.236 "name": "BaseBdev1", 00:11:13.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.236 "is_configured": false, 00:11:13.236 "data_offset": 0, 00:11:13.236 "data_size": 0 00:11:13.236 }, 00:11:13.236 { 00:11:13.236 "name": "BaseBdev2", 00:11:13.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.236 "is_configured": false, 00:11:13.236 "data_offset": 0, 00:11:13.236 "data_size": 0 00:11:13.236 }, 00:11:13.236 { 00:11:13.236 "name": "BaseBdev3", 00:11:13.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.236 "is_configured": false, 00:11:13.236 "data_offset": 0, 00:11:13.236 "data_size": 0 00:11:13.236 } 00:11:13.236 ] 00:11:13.236 }' 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.236 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.802 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:13.802 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.802 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.802 [2024-11-20 11:20:56.691162] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:13.802 [2024-11-20 11:20:56.691265] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:13.802 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.802 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:13.802 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.802 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.802 [2024-11-20 11:20:56.703165] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.802 [2024-11-20 11:20:56.703215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.802 [2024-11-20 11:20:56.703225] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.802 [2024-11-20 11:20:56.703235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.802 [2024-11-20 11:20:56.703241] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:13.802 [2024-11-20 11:20:56.703251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.802 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.802 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:13.802 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.802 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.802 [2024-11-20 11:20:56.750346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.802 BaseBdev1 
00:11:13.802 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.802 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:13.802 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.803 [ 00:11:13.803 { 00:11:13.803 "name": "BaseBdev1", 00:11:13.803 "aliases": [ 00:11:13.803 "39f07238-ef85-4c56-acd5-a420291776b2" 00:11:13.803 ], 00:11:13.803 "product_name": "Malloc disk", 00:11:13.803 "block_size": 512, 00:11:13.803 "num_blocks": 65536, 00:11:13.803 "uuid": "39f07238-ef85-4c56-acd5-a420291776b2", 00:11:13.803 "assigned_rate_limits": { 00:11:13.803 
"rw_ios_per_sec": 0, 00:11:13.803 "rw_mbytes_per_sec": 0, 00:11:13.803 "r_mbytes_per_sec": 0, 00:11:13.803 "w_mbytes_per_sec": 0 00:11:13.803 }, 00:11:13.803 "claimed": true, 00:11:13.803 "claim_type": "exclusive_write", 00:11:13.803 "zoned": false, 00:11:13.803 "supported_io_types": { 00:11:13.803 "read": true, 00:11:13.803 "write": true, 00:11:13.803 "unmap": true, 00:11:13.803 "flush": true, 00:11:13.803 "reset": true, 00:11:13.803 "nvme_admin": false, 00:11:13.803 "nvme_io": false, 00:11:13.803 "nvme_io_md": false, 00:11:13.803 "write_zeroes": true, 00:11:13.803 "zcopy": true, 00:11:13.803 "get_zone_info": false, 00:11:13.803 "zone_management": false, 00:11:13.803 "zone_append": false, 00:11:13.803 "compare": false, 00:11:13.803 "compare_and_write": false, 00:11:13.803 "abort": true, 00:11:13.803 "seek_hole": false, 00:11:13.803 "seek_data": false, 00:11:13.803 "copy": true, 00:11:13.803 "nvme_iov_md": false 00:11:13.803 }, 00:11:13.803 "memory_domains": [ 00:11:13.803 { 00:11:13.803 "dma_device_id": "system", 00:11:13.803 "dma_device_type": 1 00:11:13.803 }, 00:11:13.803 { 00:11:13.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.803 "dma_device_type": 2 00:11:13.803 } 00:11:13.803 ], 00:11:13.803 "driver_specific": {} 00:11:13.803 } 00:11:13.803 ] 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.803 "name": "Existed_Raid", 00:11:13.803 "uuid": "b82a4c56-586b-4c90-b7c5-e4dbe09e5183", 00:11:13.803 "strip_size_kb": 0, 00:11:13.803 "state": "configuring", 00:11:13.803 "raid_level": "raid1", 00:11:13.803 "superblock": true, 00:11:13.803 "num_base_bdevs": 3, 00:11:13.803 "num_base_bdevs_discovered": 1, 00:11:13.803 "num_base_bdevs_operational": 3, 00:11:13.803 "base_bdevs_list": [ 00:11:13.803 { 00:11:13.803 "name": "BaseBdev1", 00:11:13.803 "uuid": "39f07238-ef85-4c56-acd5-a420291776b2", 00:11:13.803 "is_configured": true, 00:11:13.803 "data_offset": 2048, 00:11:13.803 "data_size": 63488 
00:11:13.803 }, 00:11:13.803 { 00:11:13.803 "name": "BaseBdev2", 00:11:13.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.803 "is_configured": false, 00:11:13.803 "data_offset": 0, 00:11:13.803 "data_size": 0 00:11:13.803 }, 00:11:13.803 { 00:11:13.803 "name": "BaseBdev3", 00:11:13.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.803 "is_configured": false, 00:11:13.803 "data_offset": 0, 00:11:13.803 "data_size": 0 00:11:13.803 } 00:11:13.803 ] 00:11:13.803 }' 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.803 11:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.103 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:14.103 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.103 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.103 [2024-11-20 11:20:57.193635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.103 [2024-11-20 11:20:57.193694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:14.103 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.103 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:14.103 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.103 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.103 [2024-11-20 11:20:57.201694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.103 [2024-11-20 11:20:57.203667] 
bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.103 [2024-11-20 11:20:57.203752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.104 [2024-11-20 11:20:57.203782] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:14.104 [2024-11-20 11:20:57.203807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:14.104 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.104 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:14.104 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.104 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:14.104 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.104 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.104 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.104 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.104 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.104 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.104 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.104 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.104 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:14.104 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.104 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.104 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.104 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.363 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.364 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.364 "name": "Existed_Raid", 00:11:14.364 "uuid": "4279d72a-e638-48eb-8603-6f01afb2b673", 00:11:14.364 "strip_size_kb": 0, 00:11:14.364 "state": "configuring", 00:11:14.364 "raid_level": "raid1", 00:11:14.364 "superblock": true, 00:11:14.364 "num_base_bdevs": 3, 00:11:14.364 "num_base_bdevs_discovered": 1, 00:11:14.364 "num_base_bdevs_operational": 3, 00:11:14.364 "base_bdevs_list": [ 00:11:14.364 { 00:11:14.364 "name": "BaseBdev1", 00:11:14.364 "uuid": "39f07238-ef85-4c56-acd5-a420291776b2", 00:11:14.364 "is_configured": true, 00:11:14.364 "data_offset": 2048, 00:11:14.364 "data_size": 63488 00:11:14.364 }, 00:11:14.364 { 00:11:14.364 "name": "BaseBdev2", 00:11:14.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.364 "is_configured": false, 00:11:14.364 "data_offset": 0, 00:11:14.364 "data_size": 0 00:11:14.364 }, 00:11:14.364 { 00:11:14.364 "name": "BaseBdev3", 00:11:14.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.364 "is_configured": false, 00:11:14.364 "data_offset": 0, 00:11:14.364 "data_size": 0 00:11:14.364 } 00:11:14.364 ] 00:11:14.364 }' 00:11:14.364 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.364 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:14.623 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.624 [2024-11-20 11:20:57.681050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:14.624 BaseBdev2 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.624 [ 00:11:14.624 { 00:11:14.624 "name": "BaseBdev2", 00:11:14.624 "aliases": [ 00:11:14.624 "51d18249-88f2-4e45-9f6d-9cb708209f84" 00:11:14.624 ], 00:11:14.624 "product_name": "Malloc disk", 00:11:14.624 "block_size": 512, 00:11:14.624 "num_blocks": 65536, 00:11:14.624 "uuid": "51d18249-88f2-4e45-9f6d-9cb708209f84", 00:11:14.624 "assigned_rate_limits": { 00:11:14.624 "rw_ios_per_sec": 0, 00:11:14.624 "rw_mbytes_per_sec": 0, 00:11:14.624 "r_mbytes_per_sec": 0, 00:11:14.624 "w_mbytes_per_sec": 0 00:11:14.624 }, 00:11:14.624 "claimed": true, 00:11:14.624 "claim_type": "exclusive_write", 00:11:14.624 "zoned": false, 00:11:14.624 "supported_io_types": { 00:11:14.624 "read": true, 00:11:14.624 "write": true, 00:11:14.624 "unmap": true, 00:11:14.624 "flush": true, 00:11:14.624 "reset": true, 00:11:14.624 "nvme_admin": false, 00:11:14.624 "nvme_io": false, 00:11:14.624 "nvme_io_md": false, 00:11:14.624 "write_zeroes": true, 00:11:14.624 "zcopy": true, 00:11:14.624 "get_zone_info": false, 00:11:14.624 "zone_management": false, 00:11:14.624 "zone_append": false, 00:11:14.624 "compare": false, 00:11:14.624 "compare_and_write": false, 00:11:14.624 "abort": true, 00:11:14.624 "seek_hole": false, 00:11:14.624 "seek_data": false, 00:11:14.624 "copy": true, 00:11:14.624 "nvme_iov_md": false 00:11:14.624 }, 00:11:14.624 "memory_domains": [ 00:11:14.624 { 00:11:14.624 "dma_device_id": "system", 00:11:14.624 "dma_device_type": 1 00:11:14.624 }, 00:11:14.624 { 00:11:14.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.624 "dma_device_type": 2 00:11:14.624 } 00:11:14.624 ], 00:11:14.624 "driver_specific": {} 00:11:14.624 } 00:11:14.624 ] 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.624 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.884 
11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.884 "name": "Existed_Raid", 00:11:14.884 "uuid": "4279d72a-e638-48eb-8603-6f01afb2b673", 00:11:14.884 "strip_size_kb": 0, 00:11:14.884 "state": "configuring", 00:11:14.884 "raid_level": "raid1", 00:11:14.884 "superblock": true, 00:11:14.884 "num_base_bdevs": 3, 00:11:14.884 "num_base_bdevs_discovered": 2, 00:11:14.884 "num_base_bdevs_operational": 3, 00:11:14.884 "base_bdevs_list": [ 00:11:14.884 { 00:11:14.884 "name": "BaseBdev1", 00:11:14.884 "uuid": "39f07238-ef85-4c56-acd5-a420291776b2", 00:11:14.884 "is_configured": true, 00:11:14.884 "data_offset": 2048, 00:11:14.884 "data_size": 63488 00:11:14.884 }, 00:11:14.884 { 00:11:14.884 "name": "BaseBdev2", 00:11:14.884 "uuid": "51d18249-88f2-4e45-9f6d-9cb708209f84", 00:11:14.884 "is_configured": true, 00:11:14.884 "data_offset": 2048, 00:11:14.884 "data_size": 63488 00:11:14.884 }, 00:11:14.884 { 00:11:14.884 "name": "BaseBdev3", 00:11:14.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.884 "is_configured": false, 00:11:14.884 "data_offset": 0, 00:11:14.884 "data_size": 0 00:11:14.884 } 00:11:14.884 ] 00:11:14.884 }' 00:11:14.884 11:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.884 11:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.143 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:15.143 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.143 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.143 [2024-11-20 11:20:58.226111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.143 [2024-11-20 11:20:58.226498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:11:15.143 [2024-11-20 11:20:58.226563] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:15.143 [2024-11-20 11:20:58.226866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:15.143 [2024-11-20 11:20:58.227070] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:15.143 [2024-11-20 11:20:58.227113] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:15.143 BaseBdev3 00:11:15.143 [2024-11-20 11:20:58.227295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.143 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.143 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:15.143 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:15.143 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.143 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:15.143 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.143 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.143 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.143 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.143 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.143 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.143 11:20:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:15.143 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.143 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.143 [ 00:11:15.402 { 00:11:15.402 "name": "BaseBdev3", 00:11:15.402 "aliases": [ 00:11:15.402 "fbd824be-de43-43ea-b6a6-9f4e30827c64" 00:11:15.402 ], 00:11:15.402 "product_name": "Malloc disk", 00:11:15.402 "block_size": 512, 00:11:15.402 "num_blocks": 65536, 00:11:15.402 "uuid": "fbd824be-de43-43ea-b6a6-9f4e30827c64", 00:11:15.402 "assigned_rate_limits": { 00:11:15.402 "rw_ios_per_sec": 0, 00:11:15.402 "rw_mbytes_per_sec": 0, 00:11:15.402 "r_mbytes_per_sec": 0, 00:11:15.402 "w_mbytes_per_sec": 0 00:11:15.402 }, 00:11:15.402 "claimed": true, 00:11:15.402 "claim_type": "exclusive_write", 00:11:15.402 "zoned": false, 00:11:15.402 "supported_io_types": { 00:11:15.402 "read": true, 00:11:15.402 "write": true, 00:11:15.402 "unmap": true, 00:11:15.402 "flush": true, 00:11:15.402 "reset": true, 00:11:15.402 "nvme_admin": false, 00:11:15.402 "nvme_io": false, 00:11:15.402 "nvme_io_md": false, 00:11:15.402 "write_zeroes": true, 00:11:15.402 "zcopy": true, 00:11:15.402 "get_zone_info": false, 00:11:15.402 "zone_management": false, 00:11:15.402 "zone_append": false, 00:11:15.402 "compare": false, 00:11:15.402 "compare_and_write": false, 00:11:15.402 "abort": true, 00:11:15.402 "seek_hole": false, 00:11:15.402 "seek_data": false, 00:11:15.402 "copy": true, 00:11:15.402 "nvme_iov_md": false 00:11:15.402 }, 00:11:15.402 "memory_domains": [ 00:11:15.402 { 00:11:15.402 "dma_device_id": "system", 00:11:15.402 "dma_device_type": 1 00:11:15.402 }, 00:11:15.402 { 00:11:15.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.402 "dma_device_type": 2 00:11:15.402 } 00:11:15.402 ], 00:11:15.402 "driver_specific": {} 00:11:15.402 } 00:11:15.402 ] 
00:11:15.402 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.402 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:15.402 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:15.402 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.402 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:15.402 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.402 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.402 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.402 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.402 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.402 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.402 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.402 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.402 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.402 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.402 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.402 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.402 
11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.402 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.403 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.403 "name": "Existed_Raid", 00:11:15.403 "uuid": "4279d72a-e638-48eb-8603-6f01afb2b673", 00:11:15.403 "strip_size_kb": 0, 00:11:15.403 "state": "online", 00:11:15.403 "raid_level": "raid1", 00:11:15.403 "superblock": true, 00:11:15.403 "num_base_bdevs": 3, 00:11:15.403 "num_base_bdevs_discovered": 3, 00:11:15.403 "num_base_bdevs_operational": 3, 00:11:15.403 "base_bdevs_list": [ 00:11:15.403 { 00:11:15.403 "name": "BaseBdev1", 00:11:15.403 "uuid": "39f07238-ef85-4c56-acd5-a420291776b2", 00:11:15.403 "is_configured": true, 00:11:15.403 "data_offset": 2048, 00:11:15.403 "data_size": 63488 00:11:15.403 }, 00:11:15.403 { 00:11:15.403 "name": "BaseBdev2", 00:11:15.403 "uuid": "51d18249-88f2-4e45-9f6d-9cb708209f84", 00:11:15.403 "is_configured": true, 00:11:15.403 "data_offset": 2048, 00:11:15.403 "data_size": 63488 00:11:15.403 }, 00:11:15.403 { 00:11:15.403 "name": "BaseBdev3", 00:11:15.403 "uuid": "fbd824be-de43-43ea-b6a6-9f4e30827c64", 00:11:15.403 "is_configured": true, 00:11:15.403 "data_offset": 2048, 00:11:15.403 "data_size": 63488 00:11:15.403 } 00:11:15.403 ] 00:11:15.403 }' 00:11:15.403 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.403 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.663 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:15.663 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:15.663 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:11:15.663 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:15.663 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:15.663 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:15.663 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:15.663 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.663 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.663 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:15.663 [2024-11-20 11:20:58.737708] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.663 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.663 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:15.663 "name": "Existed_Raid", 00:11:15.663 "aliases": [ 00:11:15.663 "4279d72a-e638-48eb-8603-6f01afb2b673" 00:11:15.663 ], 00:11:15.663 "product_name": "Raid Volume", 00:11:15.663 "block_size": 512, 00:11:15.663 "num_blocks": 63488, 00:11:15.663 "uuid": "4279d72a-e638-48eb-8603-6f01afb2b673", 00:11:15.663 "assigned_rate_limits": { 00:11:15.663 "rw_ios_per_sec": 0, 00:11:15.663 "rw_mbytes_per_sec": 0, 00:11:15.663 "r_mbytes_per_sec": 0, 00:11:15.663 "w_mbytes_per_sec": 0 00:11:15.663 }, 00:11:15.663 "claimed": false, 00:11:15.663 "zoned": false, 00:11:15.663 "supported_io_types": { 00:11:15.663 "read": true, 00:11:15.663 "write": true, 00:11:15.663 "unmap": false, 00:11:15.663 "flush": false, 00:11:15.663 "reset": true, 00:11:15.663 "nvme_admin": false, 00:11:15.663 "nvme_io": false, 00:11:15.663 "nvme_io_md": false, 00:11:15.663 "write_zeroes": true, 
00:11:15.663 "zcopy": false, 00:11:15.663 "get_zone_info": false, 00:11:15.663 "zone_management": false, 00:11:15.663 "zone_append": false, 00:11:15.663 "compare": false, 00:11:15.663 "compare_and_write": false, 00:11:15.663 "abort": false, 00:11:15.663 "seek_hole": false, 00:11:15.663 "seek_data": false, 00:11:15.663 "copy": false, 00:11:15.663 "nvme_iov_md": false 00:11:15.663 }, 00:11:15.663 "memory_domains": [ 00:11:15.663 { 00:11:15.663 "dma_device_id": "system", 00:11:15.663 "dma_device_type": 1 00:11:15.663 }, 00:11:15.663 { 00:11:15.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.663 "dma_device_type": 2 00:11:15.663 }, 00:11:15.663 { 00:11:15.663 "dma_device_id": "system", 00:11:15.663 "dma_device_type": 1 00:11:15.663 }, 00:11:15.663 { 00:11:15.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.663 "dma_device_type": 2 00:11:15.663 }, 00:11:15.663 { 00:11:15.663 "dma_device_id": "system", 00:11:15.663 "dma_device_type": 1 00:11:15.663 }, 00:11:15.663 { 00:11:15.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.663 "dma_device_type": 2 00:11:15.663 } 00:11:15.663 ], 00:11:15.663 "driver_specific": { 00:11:15.663 "raid": { 00:11:15.663 "uuid": "4279d72a-e638-48eb-8603-6f01afb2b673", 00:11:15.663 "strip_size_kb": 0, 00:11:15.663 "state": "online", 00:11:15.663 "raid_level": "raid1", 00:11:15.663 "superblock": true, 00:11:15.663 "num_base_bdevs": 3, 00:11:15.663 "num_base_bdevs_discovered": 3, 00:11:15.663 "num_base_bdevs_operational": 3, 00:11:15.663 "base_bdevs_list": [ 00:11:15.663 { 00:11:15.663 "name": "BaseBdev1", 00:11:15.663 "uuid": "39f07238-ef85-4c56-acd5-a420291776b2", 00:11:15.663 "is_configured": true, 00:11:15.663 "data_offset": 2048, 00:11:15.663 "data_size": 63488 00:11:15.663 }, 00:11:15.663 { 00:11:15.663 "name": "BaseBdev2", 00:11:15.663 "uuid": "51d18249-88f2-4e45-9f6d-9cb708209f84", 00:11:15.663 "is_configured": true, 00:11:15.663 "data_offset": 2048, 00:11:15.663 "data_size": 63488 00:11:15.663 }, 00:11:15.663 { 
00:11:15.663 "name": "BaseBdev3", 00:11:15.663 "uuid": "fbd824be-de43-43ea-b6a6-9f4e30827c64", 00:11:15.663 "is_configured": true, 00:11:15.663 "data_offset": 2048, 00:11:15.663 "data_size": 63488 00:11:15.663 } 00:11:15.663 ] 00:11:15.663 } 00:11:15.663 } 00:11:15.663 }' 00:11:15.923 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:15.923 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:15.923 BaseBdev2 00:11:15.923 BaseBdev3' 00:11:15.923 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.923 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:15.923 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.923 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:15.923 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.923 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.923 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.923 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.923 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.923 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.923 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.924 11:20:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:15.924 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.924 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.924 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.924 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.924 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.924 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.924 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.924 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.924 11:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:15.924 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.924 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.924 11:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.924 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.924 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.924 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:15.924 11:20:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.924 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.924 [2024-11-20 11:20:59.012937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.184 
11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.184 "name": "Existed_Raid", 00:11:16.184 "uuid": "4279d72a-e638-48eb-8603-6f01afb2b673", 00:11:16.184 "strip_size_kb": 0, 00:11:16.184 "state": "online", 00:11:16.184 "raid_level": "raid1", 00:11:16.184 "superblock": true, 00:11:16.184 "num_base_bdevs": 3, 00:11:16.184 "num_base_bdevs_discovered": 2, 00:11:16.184 "num_base_bdevs_operational": 2, 00:11:16.184 "base_bdevs_list": [ 00:11:16.184 { 00:11:16.184 "name": null, 00:11:16.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.184 "is_configured": false, 00:11:16.184 "data_offset": 0, 00:11:16.184 "data_size": 63488 00:11:16.184 }, 00:11:16.184 { 00:11:16.184 "name": "BaseBdev2", 00:11:16.184 "uuid": "51d18249-88f2-4e45-9f6d-9cb708209f84", 00:11:16.184 "is_configured": true, 00:11:16.184 "data_offset": 2048, 00:11:16.184 "data_size": 63488 00:11:16.184 }, 00:11:16.184 { 00:11:16.184 "name": "BaseBdev3", 00:11:16.184 "uuid": "fbd824be-de43-43ea-b6a6-9f4e30827c64", 00:11:16.184 "is_configured": true, 00:11:16.184 "data_offset": 2048, 00:11:16.184 "data_size": 63488 00:11:16.184 } 00:11:16.184 ] 00:11:16.184 }' 00:11:16.184 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.184 
11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.444 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:16.444 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.444 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.444 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.444 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.444 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:16.444 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.704 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:16.704 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.704 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:16.704 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.704 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.704 [2024-11-20 11:20:59.574078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:16.704 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.704 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:16.704 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.704 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:16.704 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:16.704 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.704 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.704 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.704 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:16.704 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.704 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:16.704 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.704 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.704 [2024-11-20 11:20:59.733825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:16.704 [2024-11-20 11:20:59.733936] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.964 [2024-11-20 11:20:59.837057] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.964 [2024-11-20 11:20:59.837137] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:16.964 [2024-11-20 11:20:59.837152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.965 BaseBdev2 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.965 [ 00:11:16.965 { 00:11:16.965 "name": "BaseBdev2", 00:11:16.965 "aliases": [ 00:11:16.965 "47f35377-3608-4aa4-b4a9-2ac2ed0df8ff" 00:11:16.965 ], 00:11:16.965 "product_name": "Malloc disk", 00:11:16.965 "block_size": 512, 00:11:16.965 "num_blocks": 65536, 00:11:16.965 "uuid": "47f35377-3608-4aa4-b4a9-2ac2ed0df8ff", 00:11:16.965 "assigned_rate_limits": { 00:11:16.965 "rw_ios_per_sec": 0, 00:11:16.965 "rw_mbytes_per_sec": 0, 00:11:16.965 "r_mbytes_per_sec": 0, 00:11:16.965 "w_mbytes_per_sec": 0 00:11:16.965 }, 00:11:16.965 "claimed": false, 00:11:16.965 "zoned": false, 00:11:16.965 "supported_io_types": { 00:11:16.965 "read": true, 00:11:16.965 "write": true, 00:11:16.965 "unmap": true, 00:11:16.965 "flush": true, 00:11:16.965 "reset": true, 00:11:16.965 "nvme_admin": false, 00:11:16.965 "nvme_io": false, 00:11:16.965 
"nvme_io_md": false, 00:11:16.965 "write_zeroes": true, 00:11:16.965 "zcopy": true, 00:11:16.965 "get_zone_info": false, 00:11:16.965 "zone_management": false, 00:11:16.965 "zone_append": false, 00:11:16.965 "compare": false, 00:11:16.965 "compare_and_write": false, 00:11:16.965 "abort": true, 00:11:16.965 "seek_hole": false, 00:11:16.965 "seek_data": false, 00:11:16.965 "copy": true, 00:11:16.965 "nvme_iov_md": false 00:11:16.965 }, 00:11:16.965 "memory_domains": [ 00:11:16.965 { 00:11:16.965 "dma_device_id": "system", 00:11:16.965 "dma_device_type": 1 00:11:16.965 }, 00:11:16.965 { 00:11:16.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.965 "dma_device_type": 2 00:11:16.965 } 00:11:16.965 ], 00:11:16.965 "driver_specific": {} 00:11:16.965 } 00:11:16.965 ] 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.965 11:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.965 BaseBdev3 00:11:16.965 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.965 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:16.965 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:16.965 11:21:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.965 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:16.965 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.965 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.965 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.965 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.965 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.965 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.965 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:16.965 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.965 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.965 [ 00:11:16.965 { 00:11:16.965 "name": "BaseBdev3", 00:11:16.965 "aliases": [ 00:11:16.965 "4042a983-ab7d-4071-81a8-63b735276356" 00:11:16.965 ], 00:11:16.965 "product_name": "Malloc disk", 00:11:16.965 "block_size": 512, 00:11:16.965 "num_blocks": 65536, 00:11:16.965 "uuid": "4042a983-ab7d-4071-81a8-63b735276356", 00:11:16.965 "assigned_rate_limits": { 00:11:16.965 "rw_ios_per_sec": 0, 00:11:16.965 "rw_mbytes_per_sec": 0, 00:11:16.965 "r_mbytes_per_sec": 0, 00:11:16.965 "w_mbytes_per_sec": 0 00:11:16.965 }, 00:11:16.965 "claimed": false, 00:11:16.965 "zoned": false, 00:11:16.965 "supported_io_types": { 00:11:16.965 "read": true, 00:11:16.965 "write": true, 00:11:16.965 "unmap": true, 00:11:16.965 "flush": true, 00:11:16.965 "reset": true, 00:11:16.965 "nvme_admin": false, 
00:11:16.965 "nvme_io": false, 00:11:16.965 "nvme_io_md": false, 00:11:16.965 "write_zeroes": true, 00:11:16.965 "zcopy": true, 00:11:16.965 "get_zone_info": false, 00:11:16.965 "zone_management": false, 00:11:16.965 "zone_append": false, 00:11:16.965 "compare": false, 00:11:16.965 "compare_and_write": false, 00:11:16.965 "abort": true, 00:11:16.965 "seek_hole": false, 00:11:16.965 "seek_data": false, 00:11:16.965 "copy": true, 00:11:16.965 "nvme_iov_md": false 00:11:16.965 }, 00:11:16.965 "memory_domains": [ 00:11:16.965 { 00:11:16.965 "dma_device_id": "system", 00:11:16.965 "dma_device_type": 1 00:11:16.965 }, 00:11:16.965 { 00:11:16.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.965 "dma_device_type": 2 00:11:16.965 } 00:11:16.965 ], 00:11:16.965 "driver_specific": {} 00:11:16.965 } 00:11:16.965 ] 00:11:16.965 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.966 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:16.966 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:16.966 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:16.966 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:16.966 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.966 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.966 [2024-11-20 11:21:00.057588] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:16.966 [2024-11-20 11:21:00.057697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:16.966 [2024-11-20 11:21:00.057742] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:16.966 [2024-11-20 11:21:00.059761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:16.966 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.966 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:16.966 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.966 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.966 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.966 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.966 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.966 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.966 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.966 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.966 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.966 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.966 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.966 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.966 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.226 
11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.226 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.226 "name": "Existed_Raid", 00:11:17.226 "uuid": "eb45f58f-690c-456b-96f6-56c332414a9c", 00:11:17.226 "strip_size_kb": 0, 00:11:17.226 "state": "configuring", 00:11:17.226 "raid_level": "raid1", 00:11:17.226 "superblock": true, 00:11:17.226 "num_base_bdevs": 3, 00:11:17.226 "num_base_bdevs_discovered": 2, 00:11:17.226 "num_base_bdevs_operational": 3, 00:11:17.226 "base_bdevs_list": [ 00:11:17.226 { 00:11:17.226 "name": "BaseBdev1", 00:11:17.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.226 "is_configured": false, 00:11:17.226 "data_offset": 0, 00:11:17.226 "data_size": 0 00:11:17.226 }, 00:11:17.226 { 00:11:17.226 "name": "BaseBdev2", 00:11:17.226 "uuid": "47f35377-3608-4aa4-b4a9-2ac2ed0df8ff", 00:11:17.226 "is_configured": true, 00:11:17.226 "data_offset": 2048, 00:11:17.226 "data_size": 63488 00:11:17.226 }, 00:11:17.226 { 00:11:17.226 "name": "BaseBdev3", 00:11:17.226 "uuid": "4042a983-ab7d-4071-81a8-63b735276356", 00:11:17.226 "is_configured": true, 00:11:17.226 "data_offset": 2048, 00:11:17.226 "data_size": 63488 00:11:17.226 } 00:11:17.226 ] 00:11:17.226 }' 00:11:17.226 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.226 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.487 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:17.487 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.487 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.487 [2024-11-20 11:21:00.484882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:17.487 11:21:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.487 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:17.487 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.487 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.487 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.487 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.487 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.487 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.487 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.487 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.487 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.487 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.487 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.487 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.487 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.487 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.487 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.487 "name": 
"Existed_Raid", 00:11:17.487 "uuid": "eb45f58f-690c-456b-96f6-56c332414a9c", 00:11:17.487 "strip_size_kb": 0, 00:11:17.487 "state": "configuring", 00:11:17.487 "raid_level": "raid1", 00:11:17.487 "superblock": true, 00:11:17.487 "num_base_bdevs": 3, 00:11:17.487 "num_base_bdevs_discovered": 1, 00:11:17.487 "num_base_bdevs_operational": 3, 00:11:17.487 "base_bdevs_list": [ 00:11:17.487 { 00:11:17.487 "name": "BaseBdev1", 00:11:17.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.487 "is_configured": false, 00:11:17.487 "data_offset": 0, 00:11:17.487 "data_size": 0 00:11:17.487 }, 00:11:17.487 { 00:11:17.487 "name": null, 00:11:17.487 "uuid": "47f35377-3608-4aa4-b4a9-2ac2ed0df8ff", 00:11:17.487 "is_configured": false, 00:11:17.487 "data_offset": 0, 00:11:17.487 "data_size": 63488 00:11:17.487 }, 00:11:17.487 { 00:11:17.487 "name": "BaseBdev3", 00:11:17.487 "uuid": "4042a983-ab7d-4071-81a8-63b735276356", 00:11:17.487 "is_configured": true, 00:11:17.487 "data_offset": 2048, 00:11:17.487 "data_size": 63488 00:11:17.487 } 00:11:17.487 ] 00:11:17.487 }' 00:11:17.487 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.487 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.056 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.056 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.056 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.056 11:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:18.056 11:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.056 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:18.056 
11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:18.056 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.056 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.056 [2024-11-20 11:21:01.055264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:18.056 BaseBdev1 00:11:18.056 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.056 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:18.056 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:18.056 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.056 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:18.056 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.056 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.056 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.056 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.056 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.056 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.056 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:18.056 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:18.057 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.057 [ 00:11:18.057 { 00:11:18.057 "name": "BaseBdev1", 00:11:18.057 "aliases": [ 00:11:18.057 "7eaaa1b8-64bc-4cca-adc2-7870f93f49d4" 00:11:18.057 ], 00:11:18.057 "product_name": "Malloc disk", 00:11:18.057 "block_size": 512, 00:11:18.057 "num_blocks": 65536, 00:11:18.057 "uuid": "7eaaa1b8-64bc-4cca-adc2-7870f93f49d4", 00:11:18.057 "assigned_rate_limits": { 00:11:18.057 "rw_ios_per_sec": 0, 00:11:18.057 "rw_mbytes_per_sec": 0, 00:11:18.057 "r_mbytes_per_sec": 0, 00:11:18.057 "w_mbytes_per_sec": 0 00:11:18.057 }, 00:11:18.057 "claimed": true, 00:11:18.057 "claim_type": "exclusive_write", 00:11:18.057 "zoned": false, 00:11:18.057 "supported_io_types": { 00:11:18.057 "read": true, 00:11:18.057 "write": true, 00:11:18.057 "unmap": true, 00:11:18.057 "flush": true, 00:11:18.057 "reset": true, 00:11:18.057 "nvme_admin": false, 00:11:18.057 "nvme_io": false, 00:11:18.057 "nvme_io_md": false, 00:11:18.057 "write_zeroes": true, 00:11:18.057 "zcopy": true, 00:11:18.057 "get_zone_info": false, 00:11:18.057 "zone_management": false, 00:11:18.057 "zone_append": false, 00:11:18.057 "compare": false, 00:11:18.057 "compare_and_write": false, 00:11:18.057 "abort": true, 00:11:18.057 "seek_hole": false, 00:11:18.057 "seek_data": false, 00:11:18.057 "copy": true, 00:11:18.057 "nvme_iov_md": false 00:11:18.057 }, 00:11:18.057 "memory_domains": [ 00:11:18.057 { 00:11:18.057 "dma_device_id": "system", 00:11:18.057 "dma_device_type": 1 00:11:18.057 }, 00:11:18.057 { 00:11:18.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.057 "dma_device_type": 2 00:11:18.057 } 00:11:18.057 ], 00:11:18.057 "driver_specific": {} 00:11:18.057 } 00:11:18.057 ] 00:11:18.057 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.057 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:18.057 
11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:18.057 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.057 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.057 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.057 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.057 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.057 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.057 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.057 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.057 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.057 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.057 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.057 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.057 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.057 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.057 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.057 "name": "Existed_Raid", 00:11:18.057 "uuid": "eb45f58f-690c-456b-96f6-56c332414a9c", 00:11:18.057 "strip_size_kb": 0, 
00:11:18.057 "state": "configuring", 00:11:18.057 "raid_level": "raid1", 00:11:18.057 "superblock": true, 00:11:18.057 "num_base_bdevs": 3, 00:11:18.057 "num_base_bdevs_discovered": 2, 00:11:18.057 "num_base_bdevs_operational": 3, 00:11:18.057 "base_bdevs_list": [ 00:11:18.057 { 00:11:18.057 "name": "BaseBdev1", 00:11:18.057 "uuid": "7eaaa1b8-64bc-4cca-adc2-7870f93f49d4", 00:11:18.057 "is_configured": true, 00:11:18.057 "data_offset": 2048, 00:11:18.057 "data_size": 63488 00:11:18.057 }, 00:11:18.057 { 00:11:18.057 "name": null, 00:11:18.057 "uuid": "47f35377-3608-4aa4-b4a9-2ac2ed0df8ff", 00:11:18.057 "is_configured": false, 00:11:18.057 "data_offset": 0, 00:11:18.057 "data_size": 63488 00:11:18.057 }, 00:11:18.057 { 00:11:18.057 "name": "BaseBdev3", 00:11:18.057 "uuid": "4042a983-ab7d-4071-81a8-63b735276356", 00:11:18.057 "is_configured": true, 00:11:18.057 "data_offset": 2048, 00:11:18.057 "data_size": 63488 00:11:18.057 } 00:11:18.057 ] 00:11:18.057 }' 00:11:18.057 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.057 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.636 [2024-11-20 11:21:01.634364] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.636 "name": "Existed_Raid", 00:11:18.636 "uuid": "eb45f58f-690c-456b-96f6-56c332414a9c", 00:11:18.636 "strip_size_kb": 0, 00:11:18.636 "state": "configuring", 00:11:18.636 "raid_level": "raid1", 00:11:18.636 "superblock": true, 00:11:18.636 "num_base_bdevs": 3, 00:11:18.636 "num_base_bdevs_discovered": 1, 00:11:18.636 "num_base_bdevs_operational": 3, 00:11:18.636 "base_bdevs_list": [ 00:11:18.636 { 00:11:18.636 "name": "BaseBdev1", 00:11:18.636 "uuid": "7eaaa1b8-64bc-4cca-adc2-7870f93f49d4", 00:11:18.636 "is_configured": true, 00:11:18.636 "data_offset": 2048, 00:11:18.636 "data_size": 63488 00:11:18.636 }, 00:11:18.636 { 00:11:18.636 "name": null, 00:11:18.636 "uuid": "47f35377-3608-4aa4-b4a9-2ac2ed0df8ff", 00:11:18.636 "is_configured": false, 00:11:18.636 "data_offset": 0, 00:11:18.636 "data_size": 63488 00:11:18.636 }, 00:11:18.636 { 00:11:18.636 "name": null, 00:11:18.636 "uuid": "4042a983-ab7d-4071-81a8-63b735276356", 00:11:18.636 "is_configured": false, 00:11:18.636 "data_offset": 0, 00:11:18.636 "data_size": 63488 00:11:18.636 } 00:11:18.636 ] 00:11:18.636 }' 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.636 11:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.220 11:21:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.220 [2024-11-20 11:21:02.093669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.220 "name": "Existed_Raid", 00:11:19.220 "uuid": "eb45f58f-690c-456b-96f6-56c332414a9c", 00:11:19.220 "strip_size_kb": 0, 00:11:19.220 "state": "configuring", 00:11:19.220 "raid_level": "raid1", 00:11:19.220 "superblock": true, 00:11:19.220 "num_base_bdevs": 3, 00:11:19.220 "num_base_bdevs_discovered": 2, 00:11:19.220 "num_base_bdevs_operational": 3, 00:11:19.220 "base_bdevs_list": [ 00:11:19.220 { 00:11:19.220 "name": "BaseBdev1", 00:11:19.220 "uuid": "7eaaa1b8-64bc-4cca-adc2-7870f93f49d4", 00:11:19.220 "is_configured": true, 00:11:19.220 "data_offset": 2048, 00:11:19.220 "data_size": 63488 00:11:19.220 }, 00:11:19.220 { 00:11:19.220 "name": null, 00:11:19.220 "uuid": "47f35377-3608-4aa4-b4a9-2ac2ed0df8ff", 00:11:19.220 "is_configured": false, 00:11:19.220 "data_offset": 0, 00:11:19.220 "data_size": 63488 00:11:19.220 }, 00:11:19.220 { 00:11:19.220 "name": "BaseBdev3", 00:11:19.220 "uuid": "4042a983-ab7d-4071-81a8-63b735276356", 00:11:19.220 "is_configured": true, 00:11:19.220 "data_offset": 2048, 00:11:19.220 "data_size": 63488 00:11:19.220 } 00:11:19.220 ] 00:11:19.220 }' 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.220 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.480 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.480 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.480 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.480 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:19.480 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.739 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:19.739 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:19.739 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.739 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.739 [2024-11-20 11:21:02.612806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:19.739 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.739 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:19.739 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.739 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.739 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.739 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:11:19.740 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.740 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.740 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.740 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.740 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.740 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.740 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.740 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.740 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.740 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.740 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.740 "name": "Existed_Raid", 00:11:19.740 "uuid": "eb45f58f-690c-456b-96f6-56c332414a9c", 00:11:19.740 "strip_size_kb": 0, 00:11:19.740 "state": "configuring", 00:11:19.740 "raid_level": "raid1", 00:11:19.740 "superblock": true, 00:11:19.740 "num_base_bdevs": 3, 00:11:19.740 "num_base_bdevs_discovered": 1, 00:11:19.740 "num_base_bdevs_operational": 3, 00:11:19.740 "base_bdevs_list": [ 00:11:19.740 { 00:11:19.740 "name": null, 00:11:19.740 "uuid": "7eaaa1b8-64bc-4cca-adc2-7870f93f49d4", 00:11:19.740 "is_configured": false, 00:11:19.740 "data_offset": 0, 00:11:19.740 "data_size": 63488 00:11:19.740 }, 00:11:19.740 { 00:11:19.740 "name": null, 00:11:19.740 "uuid": 
"47f35377-3608-4aa4-b4a9-2ac2ed0df8ff", 00:11:19.740 "is_configured": false, 00:11:19.740 "data_offset": 0, 00:11:19.740 "data_size": 63488 00:11:19.740 }, 00:11:19.740 { 00:11:19.740 "name": "BaseBdev3", 00:11:19.740 "uuid": "4042a983-ab7d-4071-81a8-63b735276356", 00:11:19.740 "is_configured": true, 00:11:19.740 "data_offset": 2048, 00:11:19.740 "data_size": 63488 00:11:19.740 } 00:11:19.740 ] 00:11:19.740 }' 00:11:19.740 11:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.740 11:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.310 [2024-11-20 11:21:03.177375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.310 "name": "Existed_Raid", 00:11:20.310 "uuid": "eb45f58f-690c-456b-96f6-56c332414a9c", 00:11:20.310 "strip_size_kb": 0, 00:11:20.310 "state": "configuring", 00:11:20.310 
"raid_level": "raid1", 00:11:20.310 "superblock": true, 00:11:20.310 "num_base_bdevs": 3, 00:11:20.310 "num_base_bdevs_discovered": 2, 00:11:20.310 "num_base_bdevs_operational": 3, 00:11:20.310 "base_bdevs_list": [ 00:11:20.310 { 00:11:20.310 "name": null, 00:11:20.310 "uuid": "7eaaa1b8-64bc-4cca-adc2-7870f93f49d4", 00:11:20.310 "is_configured": false, 00:11:20.310 "data_offset": 0, 00:11:20.310 "data_size": 63488 00:11:20.310 }, 00:11:20.310 { 00:11:20.310 "name": "BaseBdev2", 00:11:20.310 "uuid": "47f35377-3608-4aa4-b4a9-2ac2ed0df8ff", 00:11:20.310 "is_configured": true, 00:11:20.310 "data_offset": 2048, 00:11:20.310 "data_size": 63488 00:11:20.310 }, 00:11:20.310 { 00:11:20.310 "name": "BaseBdev3", 00:11:20.310 "uuid": "4042a983-ab7d-4071-81a8-63b735276356", 00:11:20.310 "is_configured": true, 00:11:20.310 "data_offset": 2048, 00:11:20.310 "data_size": 63488 00:11:20.310 } 00:11:20.310 ] 00:11:20.310 }' 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.310 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.569 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:20.569 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.570 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.570 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.570 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.570 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:20.570 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:20.570 11:21:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.570 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.570 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.570 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.570 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7eaaa1b8-64bc-4cca-adc2-7870f93f49d4 00:11:20.570 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.570 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.570 [2024-11-20 11:21:03.681854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:20.570 [2024-11-20 11:21:03.682078] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:20.570 [2024-11-20 11:21:03.682092] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:20.570 NewBaseBdev 00:11:20.570 [2024-11-20 11:21:03.682339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:20.570 [2024-11-20 11:21:03.682571] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:20.570 [2024-11-20 11:21:03.682589] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:20.830 [2024-11-20 11:21:03.682742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.830 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.830 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:20.830 
11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:20.830 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.830 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:20.830 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.830 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.830 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.830 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.830 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.830 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.830 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:20.830 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.830 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.830 [ 00:11:20.830 { 00:11:20.830 "name": "NewBaseBdev", 00:11:20.830 "aliases": [ 00:11:20.830 "7eaaa1b8-64bc-4cca-adc2-7870f93f49d4" 00:11:20.830 ], 00:11:20.830 "product_name": "Malloc disk", 00:11:20.830 "block_size": 512, 00:11:20.830 "num_blocks": 65536, 00:11:20.830 "uuid": "7eaaa1b8-64bc-4cca-adc2-7870f93f49d4", 00:11:20.830 "assigned_rate_limits": { 00:11:20.830 "rw_ios_per_sec": 0, 00:11:20.830 "rw_mbytes_per_sec": 0, 00:11:20.830 "r_mbytes_per_sec": 0, 00:11:20.830 "w_mbytes_per_sec": 0 00:11:20.830 }, 00:11:20.830 "claimed": true, 00:11:20.830 "claim_type": "exclusive_write", 00:11:20.830 
"zoned": false, 00:11:20.830 "supported_io_types": { 00:11:20.830 "read": true, 00:11:20.830 "write": true, 00:11:20.830 "unmap": true, 00:11:20.830 "flush": true, 00:11:20.830 "reset": true, 00:11:20.830 "nvme_admin": false, 00:11:20.830 "nvme_io": false, 00:11:20.830 "nvme_io_md": false, 00:11:20.830 "write_zeroes": true, 00:11:20.830 "zcopy": true, 00:11:20.830 "get_zone_info": false, 00:11:20.830 "zone_management": false, 00:11:20.830 "zone_append": false, 00:11:20.830 "compare": false, 00:11:20.830 "compare_and_write": false, 00:11:20.830 "abort": true, 00:11:20.830 "seek_hole": false, 00:11:20.830 "seek_data": false, 00:11:20.830 "copy": true, 00:11:20.830 "nvme_iov_md": false 00:11:20.830 }, 00:11:20.830 "memory_domains": [ 00:11:20.830 { 00:11:20.830 "dma_device_id": "system", 00:11:20.830 "dma_device_type": 1 00:11:20.830 }, 00:11:20.830 { 00:11:20.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.830 "dma_device_type": 2 00:11:20.830 } 00:11:20.830 ], 00:11:20.830 "driver_specific": {} 00:11:20.830 } 00:11:20.830 ] 00:11:20.830 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.830 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:20.830 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:20.830 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.830 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.830 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.830 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.831 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:11:20.831 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.831 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.831 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.831 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.831 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.831 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.831 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.831 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.831 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.831 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.831 "name": "Existed_Raid", 00:11:20.831 "uuid": "eb45f58f-690c-456b-96f6-56c332414a9c", 00:11:20.831 "strip_size_kb": 0, 00:11:20.831 "state": "online", 00:11:20.831 "raid_level": "raid1", 00:11:20.831 "superblock": true, 00:11:20.831 "num_base_bdevs": 3, 00:11:20.831 "num_base_bdevs_discovered": 3, 00:11:20.831 "num_base_bdevs_operational": 3, 00:11:20.831 "base_bdevs_list": [ 00:11:20.831 { 00:11:20.831 "name": "NewBaseBdev", 00:11:20.831 "uuid": "7eaaa1b8-64bc-4cca-adc2-7870f93f49d4", 00:11:20.831 "is_configured": true, 00:11:20.831 "data_offset": 2048, 00:11:20.831 "data_size": 63488 00:11:20.831 }, 00:11:20.831 { 00:11:20.831 "name": "BaseBdev2", 00:11:20.831 "uuid": "47f35377-3608-4aa4-b4a9-2ac2ed0df8ff", 00:11:20.831 "is_configured": true, 00:11:20.831 "data_offset": 2048, 00:11:20.831 "data_size": 63488 00:11:20.831 }, 00:11:20.831 
{ 00:11:20.831 "name": "BaseBdev3", 00:11:20.831 "uuid": "4042a983-ab7d-4071-81a8-63b735276356", 00:11:20.831 "is_configured": true, 00:11:20.831 "data_offset": 2048, 00:11:20.831 "data_size": 63488 00:11:20.831 } 00:11:20.831 ] 00:11:20.831 }' 00:11:20.831 11:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.831 11:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.128 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:21.128 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:21.128 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:21.128 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:21.128 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:21.128 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:21.128 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:21.128 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:21.128 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.128 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.128 [2024-11-20 11:21:04.161490] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.128 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.128 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:21.128 "name": "Existed_Raid", 00:11:21.128 
"aliases": [ 00:11:21.128 "eb45f58f-690c-456b-96f6-56c332414a9c" 00:11:21.128 ], 00:11:21.128 "product_name": "Raid Volume", 00:11:21.128 "block_size": 512, 00:11:21.128 "num_blocks": 63488, 00:11:21.128 "uuid": "eb45f58f-690c-456b-96f6-56c332414a9c", 00:11:21.128 "assigned_rate_limits": { 00:11:21.128 "rw_ios_per_sec": 0, 00:11:21.128 "rw_mbytes_per_sec": 0, 00:11:21.128 "r_mbytes_per_sec": 0, 00:11:21.128 "w_mbytes_per_sec": 0 00:11:21.128 }, 00:11:21.128 "claimed": false, 00:11:21.128 "zoned": false, 00:11:21.128 "supported_io_types": { 00:11:21.128 "read": true, 00:11:21.128 "write": true, 00:11:21.128 "unmap": false, 00:11:21.128 "flush": false, 00:11:21.128 "reset": true, 00:11:21.128 "nvme_admin": false, 00:11:21.128 "nvme_io": false, 00:11:21.128 "nvme_io_md": false, 00:11:21.128 "write_zeroes": true, 00:11:21.128 "zcopy": false, 00:11:21.128 "get_zone_info": false, 00:11:21.128 "zone_management": false, 00:11:21.128 "zone_append": false, 00:11:21.128 "compare": false, 00:11:21.128 "compare_and_write": false, 00:11:21.128 "abort": false, 00:11:21.128 "seek_hole": false, 00:11:21.128 "seek_data": false, 00:11:21.128 "copy": false, 00:11:21.128 "nvme_iov_md": false 00:11:21.128 }, 00:11:21.128 "memory_domains": [ 00:11:21.128 { 00:11:21.128 "dma_device_id": "system", 00:11:21.128 "dma_device_type": 1 00:11:21.128 }, 00:11:21.128 { 00:11:21.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.128 "dma_device_type": 2 00:11:21.128 }, 00:11:21.128 { 00:11:21.128 "dma_device_id": "system", 00:11:21.128 "dma_device_type": 1 00:11:21.128 }, 00:11:21.128 { 00:11:21.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.128 "dma_device_type": 2 00:11:21.128 }, 00:11:21.128 { 00:11:21.128 "dma_device_id": "system", 00:11:21.128 "dma_device_type": 1 00:11:21.128 }, 00:11:21.128 { 00:11:21.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.128 "dma_device_type": 2 00:11:21.128 } 00:11:21.128 ], 00:11:21.128 "driver_specific": { 00:11:21.128 "raid": { 00:11:21.128 
"uuid": "eb45f58f-690c-456b-96f6-56c332414a9c", 00:11:21.128 "strip_size_kb": 0, 00:11:21.128 "state": "online", 00:11:21.128 "raid_level": "raid1", 00:11:21.128 "superblock": true, 00:11:21.128 "num_base_bdevs": 3, 00:11:21.128 "num_base_bdevs_discovered": 3, 00:11:21.128 "num_base_bdevs_operational": 3, 00:11:21.128 "base_bdevs_list": [ 00:11:21.128 { 00:11:21.128 "name": "NewBaseBdev", 00:11:21.128 "uuid": "7eaaa1b8-64bc-4cca-adc2-7870f93f49d4", 00:11:21.128 "is_configured": true, 00:11:21.128 "data_offset": 2048, 00:11:21.129 "data_size": 63488 00:11:21.129 }, 00:11:21.129 { 00:11:21.129 "name": "BaseBdev2", 00:11:21.129 "uuid": "47f35377-3608-4aa4-b4a9-2ac2ed0df8ff", 00:11:21.129 "is_configured": true, 00:11:21.129 "data_offset": 2048, 00:11:21.129 "data_size": 63488 00:11:21.129 }, 00:11:21.129 { 00:11:21.129 "name": "BaseBdev3", 00:11:21.129 "uuid": "4042a983-ab7d-4071-81a8-63b735276356", 00:11:21.129 "is_configured": true, 00:11:21.129 "data_offset": 2048, 00:11:21.129 "data_size": 63488 00:11:21.129 } 00:11:21.129 ] 00:11:21.129 } 00:11:21.129 } 00:11:21.129 }' 00:11:21.129 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:21.400 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:21.400 BaseBdev2 00:11:21.400 BaseBdev3' 00:11:21.400 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.400 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:21.400 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.400 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:21.400 11:21:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.400 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.400 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.400 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.400 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.401 [2024-11-20 11:21:04.432673] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:21.401 [2024-11-20 11:21:04.432760] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:21.401 [2024-11-20 11:21:04.432874] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.401 [2024-11-20 11:21:04.433190] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.401 [2024-11-20 11:21:04.433202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68157 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68157 ']' 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68157 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68157 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68157' 00:11:21.401 killing process with pid 68157 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68157 00:11:21.401 [2024-11-20 11:21:04.474126] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:21.401 11:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68157 00:11:21.970 [2024-11-20 11:21:04.783150] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.909 11:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:22.909 00:11:22.910 real 0m10.687s 00:11:22.910 user 0m17.022s 00:11:22.910 sys 0m1.786s 00:11:22.910 11:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.910 11:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.910 ************************************ 00:11:22.910 END TEST raid_state_function_test_sb 00:11:22.910 ************************************ 00:11:22.910 11:21:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:11:22.910 11:21:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:22.910 11:21:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.910 11:21:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:22.910 ************************************ 00:11:22.910 START TEST raid_superblock_test 00:11:22.910 ************************************ 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68777 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68777 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68777 ']' 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.910 11:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.170 11:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.170 11:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.170 [2024-11-20 11:21:06.110596] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:11:23.170 [2024-11-20 11:21:06.110816] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68777 ] 00:11:23.430 [2024-11-20 11:21:06.288213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.430 [2024-11-20 11:21:06.406788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.688 [2024-11-20 11:21:06.607194] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.688 [2024-11-20 11:21:06.607254] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.947 11:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.947 11:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:23.947 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:23.947 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:23.947 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:23.947 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:23.947 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:23.947 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:23.947 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:23.947 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:23.947 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:23.947 
11:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.947 11:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.947 malloc1 00:11:23.947 11:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.947 11:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:23.947 11:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.947 11:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.947 [2024-11-20 11:21:07.004376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:23.947 [2024-11-20 11:21:07.004529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.947 [2024-11-20 11:21:07.004582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:23.947 [2024-11-20 11:21:07.004655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.947 [2024-11-20 11:21:07.006839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.947 [2024-11-20 11:21:07.006925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:23.947 pt1 00:11:23.947 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.947 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:23.947 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:23.947 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:23.947 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:23.947 11:21:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:23.947 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:23.947 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:23.947 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:23.947 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:23.947 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.947 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.947 malloc2 00:11:23.947 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.947 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:23.947 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.947 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.207 [2024-11-20 11:21:07.062276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:24.207 [2024-11-20 11:21:07.062337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.208 [2024-11-20 11:21:07.062360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:24.208 [2024-11-20 11:21:07.062369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.208 [2024-11-20 11:21:07.064645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.208 [2024-11-20 11:21:07.064727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:24.208 
pt2 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.208 malloc3 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.208 [2024-11-20 11:21:07.132878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:24.208 [2024-11-20 11:21:07.132990] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.208 [2024-11-20 11:21:07.133033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:24.208 [2024-11-20 11:21:07.133078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.208 [2024-11-20 11:21:07.135266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.208 [2024-11-20 11:21:07.135345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:24.208 pt3 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.208 [2024-11-20 11:21:07.144911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:24.208 [2024-11-20 11:21:07.146761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:24.208 [2024-11-20 11:21:07.146873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:24.208 [2024-11-20 11:21:07.147103] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:24.208 [2024-11-20 11:21:07.147162] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:24.208 [2024-11-20 11:21:07.147477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:24.208 
[2024-11-20 11:21:07.147720] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:24.208 [2024-11-20 11:21:07.147773] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:24.208 [2024-11-20 11:21:07.147988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.208 "name": "raid_bdev1", 00:11:24.208 "uuid": "b67da187-b9b8-408d-9cf2-10fe2567b6b5", 00:11:24.208 "strip_size_kb": 0, 00:11:24.208 "state": "online", 00:11:24.208 "raid_level": "raid1", 00:11:24.208 "superblock": true, 00:11:24.208 "num_base_bdevs": 3, 00:11:24.208 "num_base_bdevs_discovered": 3, 00:11:24.208 "num_base_bdevs_operational": 3, 00:11:24.208 "base_bdevs_list": [ 00:11:24.208 { 00:11:24.208 "name": "pt1", 00:11:24.208 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:24.208 "is_configured": true, 00:11:24.208 "data_offset": 2048, 00:11:24.208 "data_size": 63488 00:11:24.208 }, 00:11:24.208 { 00:11:24.208 "name": "pt2", 00:11:24.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:24.208 "is_configured": true, 00:11:24.208 "data_offset": 2048, 00:11:24.208 "data_size": 63488 00:11:24.208 }, 00:11:24.208 { 00:11:24.208 "name": "pt3", 00:11:24.208 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:24.208 "is_configured": true, 00:11:24.208 "data_offset": 2048, 00:11:24.208 "data_size": 63488 00:11:24.208 } 00:11:24.208 ] 00:11:24.208 }' 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.208 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:24.777 11:21:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.777 [2024-11-20 11:21:07.656339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:24.777 "name": "raid_bdev1", 00:11:24.777 "aliases": [ 00:11:24.777 "b67da187-b9b8-408d-9cf2-10fe2567b6b5" 00:11:24.777 ], 00:11:24.777 "product_name": "Raid Volume", 00:11:24.777 "block_size": 512, 00:11:24.777 "num_blocks": 63488, 00:11:24.777 "uuid": "b67da187-b9b8-408d-9cf2-10fe2567b6b5", 00:11:24.777 "assigned_rate_limits": { 00:11:24.777 "rw_ios_per_sec": 0, 00:11:24.777 "rw_mbytes_per_sec": 0, 00:11:24.777 "r_mbytes_per_sec": 0, 00:11:24.777 "w_mbytes_per_sec": 0 00:11:24.777 }, 00:11:24.777 "claimed": false, 00:11:24.777 "zoned": false, 00:11:24.777 "supported_io_types": { 00:11:24.777 "read": true, 00:11:24.777 "write": true, 00:11:24.777 "unmap": false, 00:11:24.777 "flush": false, 00:11:24.777 "reset": true, 00:11:24.777 "nvme_admin": false, 00:11:24.777 "nvme_io": false, 00:11:24.777 "nvme_io_md": false, 00:11:24.777 "write_zeroes": true, 00:11:24.777 "zcopy": false, 00:11:24.777 "get_zone_info": false, 00:11:24.777 "zone_management": false, 00:11:24.777 "zone_append": false, 00:11:24.777 "compare": false, 00:11:24.777 
"compare_and_write": false, 00:11:24.777 "abort": false, 00:11:24.777 "seek_hole": false, 00:11:24.777 "seek_data": false, 00:11:24.777 "copy": false, 00:11:24.777 "nvme_iov_md": false 00:11:24.777 }, 00:11:24.777 "memory_domains": [ 00:11:24.777 { 00:11:24.777 "dma_device_id": "system", 00:11:24.777 "dma_device_type": 1 00:11:24.777 }, 00:11:24.777 { 00:11:24.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.777 "dma_device_type": 2 00:11:24.777 }, 00:11:24.777 { 00:11:24.777 "dma_device_id": "system", 00:11:24.777 "dma_device_type": 1 00:11:24.777 }, 00:11:24.777 { 00:11:24.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.777 "dma_device_type": 2 00:11:24.777 }, 00:11:24.777 { 00:11:24.777 "dma_device_id": "system", 00:11:24.777 "dma_device_type": 1 00:11:24.777 }, 00:11:24.777 { 00:11:24.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.777 "dma_device_type": 2 00:11:24.777 } 00:11:24.777 ], 00:11:24.777 "driver_specific": { 00:11:24.777 "raid": { 00:11:24.777 "uuid": "b67da187-b9b8-408d-9cf2-10fe2567b6b5", 00:11:24.777 "strip_size_kb": 0, 00:11:24.777 "state": "online", 00:11:24.777 "raid_level": "raid1", 00:11:24.777 "superblock": true, 00:11:24.777 "num_base_bdevs": 3, 00:11:24.777 "num_base_bdevs_discovered": 3, 00:11:24.777 "num_base_bdevs_operational": 3, 00:11:24.777 "base_bdevs_list": [ 00:11:24.777 { 00:11:24.777 "name": "pt1", 00:11:24.777 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:24.777 "is_configured": true, 00:11:24.777 "data_offset": 2048, 00:11:24.777 "data_size": 63488 00:11:24.777 }, 00:11:24.777 { 00:11:24.777 "name": "pt2", 00:11:24.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:24.777 "is_configured": true, 00:11:24.777 "data_offset": 2048, 00:11:24.777 "data_size": 63488 00:11:24.777 }, 00:11:24.777 { 00:11:24.777 "name": "pt3", 00:11:24.777 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:24.777 "is_configured": true, 00:11:24.777 "data_offset": 2048, 00:11:24.777 "data_size": 63488 00:11:24.777 } 
00:11:24.777 ] 00:11:24.777 } 00:11:24.777 } 00:11:24.777 }' 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:24.777 pt2 00:11:24.777 pt3' 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.777 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.036 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.036 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.036 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.036 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:25.036 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:25.036 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.036 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.036 [2024-11-20 11:21:07.927895] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:25.036 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:25.036 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b67da187-b9b8-408d-9cf2-10fe2567b6b5 00:11:25.036 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b67da187-b9b8-408d-9cf2-10fe2567b6b5 ']' 00:11:25.036 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:25.036 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.036 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.036 [2024-11-20 11:21:07.971562] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:25.036 [2024-11-20 11:21:07.971673] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.036 [2024-11-20 11:21:07.971781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.036 [2024-11-20 11:21:07.971871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.036 [2024-11-20 11:21:07.971882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:25.036 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.036 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.036 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.036 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.036 11:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:25.036 11:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:25.036 11:21:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.036 [2024-11-20 11:21:08.111380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:25.036 [2024-11-20 11:21:08.113626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:25.036 [2024-11-20 11:21:08.113738] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:25.036 [2024-11-20 11:21:08.113818] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:25.036 [2024-11-20 11:21:08.113954] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:25.036 [2024-11-20 11:21:08.114034] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:25.036 [2024-11-20 11:21:08.114099] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:25.036 [2024-11-20 11:21:08.114139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:25.036 request: 00:11:25.036 { 00:11:25.036 "name": "raid_bdev1", 00:11:25.036 "raid_level": "raid1", 00:11:25.036 "base_bdevs": [ 00:11:25.036 "malloc1", 00:11:25.036 "malloc2", 00:11:25.036 "malloc3" 00:11:25.036 ], 00:11:25.036 "superblock": false, 00:11:25.036 "method": "bdev_raid_create", 00:11:25.036 "req_id": 1 00:11:25.036 } 00:11:25.036 Got JSON-RPC error response 00:11:25.036 response: 00:11:25.036 { 00:11:25.036 "code": -17, 00:11:25.036 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:25.036 } 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq 
-r '.[]' 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.036 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.294 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:25.294 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:25.294 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:25.294 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.294 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.294 [2024-11-20 11:21:08.171173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:25.294 [2024-11-20 11:21:08.171295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.294 [2024-11-20 11:21:08.171342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:25.294 [2024-11-20 11:21:08.171377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.295 [2024-11-20 11:21:08.173811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.295 [2024-11-20 11:21:08.173913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:25.295 [2024-11-20 11:21:08.174039] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:25.295 [2024-11-20 11:21:08.174141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:25.295 pt1 00:11:25.295 
11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.295 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:25.295 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.295 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.295 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.295 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.295 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.295 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.295 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.295 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.295 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.295 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.295 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.295 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.295 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.295 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.295 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.295 "name": "raid_bdev1", 00:11:25.295 "uuid": "b67da187-b9b8-408d-9cf2-10fe2567b6b5", 00:11:25.295 "strip_size_kb": 0, 00:11:25.295 
"state": "configuring", 00:11:25.295 "raid_level": "raid1", 00:11:25.295 "superblock": true, 00:11:25.295 "num_base_bdevs": 3, 00:11:25.295 "num_base_bdevs_discovered": 1, 00:11:25.295 "num_base_bdevs_operational": 3, 00:11:25.295 "base_bdevs_list": [ 00:11:25.295 { 00:11:25.295 "name": "pt1", 00:11:25.295 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:25.295 "is_configured": true, 00:11:25.295 "data_offset": 2048, 00:11:25.295 "data_size": 63488 00:11:25.295 }, 00:11:25.295 { 00:11:25.295 "name": null, 00:11:25.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:25.295 "is_configured": false, 00:11:25.295 "data_offset": 2048, 00:11:25.295 "data_size": 63488 00:11:25.295 }, 00:11:25.295 { 00:11:25.295 "name": null, 00:11:25.295 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:25.295 "is_configured": false, 00:11:25.295 "data_offset": 2048, 00:11:25.295 "data_size": 63488 00:11:25.295 } 00:11:25.295 ] 00:11:25.295 }' 00:11:25.295 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.295 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.613 [2024-11-20 11:21:08.590493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:25.613 [2024-11-20 11:21:08.590561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.613 [2024-11-20 11:21:08.590587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:25.613 
[2024-11-20 11:21:08.590597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.613 [2024-11-20 11:21:08.591115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.613 [2024-11-20 11:21:08.591141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:25.613 [2024-11-20 11:21:08.591238] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:25.613 [2024-11-20 11:21:08.591261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:25.613 pt2 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.613 [2024-11-20 11:21:08.602484] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.613 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.614 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.614 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.614 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.614 "name": "raid_bdev1", 00:11:25.614 "uuid": "b67da187-b9b8-408d-9cf2-10fe2567b6b5", 00:11:25.614 "strip_size_kb": 0, 00:11:25.614 "state": "configuring", 00:11:25.614 "raid_level": "raid1", 00:11:25.614 "superblock": true, 00:11:25.614 "num_base_bdevs": 3, 00:11:25.614 "num_base_bdevs_discovered": 1, 00:11:25.614 "num_base_bdevs_operational": 3, 00:11:25.614 "base_bdevs_list": [ 00:11:25.614 { 00:11:25.614 "name": "pt1", 00:11:25.614 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:25.614 "is_configured": true, 00:11:25.614 "data_offset": 2048, 00:11:25.614 "data_size": 63488 00:11:25.614 }, 00:11:25.614 { 00:11:25.614 "name": null, 00:11:25.614 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:25.614 "is_configured": false, 00:11:25.614 "data_offset": 0, 00:11:25.614 "data_size": 63488 00:11:25.614 }, 00:11:25.614 { 00:11:25.614 "name": null, 00:11:25.614 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:25.614 "is_configured": false, 00:11:25.614 
"data_offset": 2048, 00:11:25.614 "data_size": 63488 00:11:25.614 } 00:11:25.614 ] 00:11:25.614 }' 00:11:25.614 11:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.614 11:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.185 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:26.185 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:26.185 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:26.185 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.185 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.185 [2024-11-20 11:21:09.077625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:26.185 [2024-11-20 11:21:09.077766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.185 [2024-11-20 11:21:09.077806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:26.185 [2024-11-20 11:21:09.077841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.185 [2024-11-20 11:21:09.078366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.186 [2024-11-20 11:21:09.078436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:26.186 [2024-11-20 11:21:09.078588] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:26.186 [2024-11-20 11:21:09.078671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:26.186 pt2 00:11:26.186 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.186 11:21:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:26.186 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:26.186 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:26.186 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.186 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.186 [2024-11-20 11:21:09.089571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:26.186 [2024-11-20 11:21:09.089660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.186 [2024-11-20 11:21:09.089697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:26.186 [2024-11-20 11:21:09.089731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.186 [2024-11-20 11:21:09.090126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.186 [2024-11-20 11:21:09.090188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:26.186 [2024-11-20 11:21:09.090278] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:26.186 [2024-11-20 11:21:09.090328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:26.186 [2024-11-20 11:21:09.090490] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:26.186 [2024-11-20 11:21:09.090553] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:26.186 [2024-11-20 11:21:09.090837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:26.186 [2024-11-20 11:21:09.091063] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:11:26.186 [2024-11-20 11:21:09.091110] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:26.186 [2024-11-20 11:21:09.091307] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.186 pt3 00:11:26.186 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.186 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:26.186 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:26.186 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:26.186 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.186 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.186 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.186 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.186 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.186 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.186 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.186 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.186 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.187 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.187 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.187 11:21:09 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:26.187 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.187 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.187 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.187 "name": "raid_bdev1", 00:11:26.187 "uuid": "b67da187-b9b8-408d-9cf2-10fe2567b6b5", 00:11:26.187 "strip_size_kb": 0, 00:11:26.187 "state": "online", 00:11:26.187 "raid_level": "raid1", 00:11:26.187 "superblock": true, 00:11:26.187 "num_base_bdevs": 3, 00:11:26.187 "num_base_bdevs_discovered": 3, 00:11:26.187 "num_base_bdevs_operational": 3, 00:11:26.187 "base_bdevs_list": [ 00:11:26.187 { 00:11:26.187 "name": "pt1", 00:11:26.187 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:26.187 "is_configured": true, 00:11:26.187 "data_offset": 2048, 00:11:26.187 "data_size": 63488 00:11:26.187 }, 00:11:26.187 { 00:11:26.187 "name": "pt2", 00:11:26.187 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.187 "is_configured": true, 00:11:26.187 "data_offset": 2048, 00:11:26.187 "data_size": 63488 00:11:26.187 }, 00:11:26.187 { 00:11:26.187 "name": "pt3", 00:11:26.187 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:26.187 "is_configured": true, 00:11:26.187 "data_offset": 2048, 00:11:26.187 "data_size": 63488 00:11:26.187 } 00:11:26.187 ] 00:11:26.187 }' 00:11:26.187 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.187 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.446 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:26.446 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:26.446 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:11:26.446 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:26.446 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:26.446 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:26.446 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:26.446 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:26.446 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.446 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.446 [2024-11-20 11:21:09.537213] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.446 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.705 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:26.705 "name": "raid_bdev1", 00:11:26.705 "aliases": [ 00:11:26.705 "b67da187-b9b8-408d-9cf2-10fe2567b6b5" 00:11:26.705 ], 00:11:26.705 "product_name": "Raid Volume", 00:11:26.705 "block_size": 512, 00:11:26.705 "num_blocks": 63488, 00:11:26.705 "uuid": "b67da187-b9b8-408d-9cf2-10fe2567b6b5", 00:11:26.705 "assigned_rate_limits": { 00:11:26.705 "rw_ios_per_sec": 0, 00:11:26.705 "rw_mbytes_per_sec": 0, 00:11:26.705 "r_mbytes_per_sec": 0, 00:11:26.705 "w_mbytes_per_sec": 0 00:11:26.705 }, 00:11:26.705 "claimed": false, 00:11:26.705 "zoned": false, 00:11:26.705 "supported_io_types": { 00:11:26.705 "read": true, 00:11:26.705 "write": true, 00:11:26.705 "unmap": false, 00:11:26.705 "flush": false, 00:11:26.705 "reset": true, 00:11:26.705 "nvme_admin": false, 00:11:26.705 "nvme_io": false, 00:11:26.705 "nvme_io_md": false, 00:11:26.705 "write_zeroes": true, 00:11:26.705 "zcopy": false, 00:11:26.705 "get_zone_info": false, 
00:11:26.705 "zone_management": false, 00:11:26.705 "zone_append": false, 00:11:26.705 "compare": false, 00:11:26.705 "compare_and_write": false, 00:11:26.705 "abort": false, 00:11:26.705 "seek_hole": false, 00:11:26.705 "seek_data": false, 00:11:26.705 "copy": false, 00:11:26.705 "nvme_iov_md": false 00:11:26.705 }, 00:11:26.705 "memory_domains": [ 00:11:26.705 { 00:11:26.705 "dma_device_id": "system", 00:11:26.705 "dma_device_type": 1 00:11:26.705 }, 00:11:26.705 { 00:11:26.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.705 "dma_device_type": 2 00:11:26.705 }, 00:11:26.705 { 00:11:26.705 "dma_device_id": "system", 00:11:26.705 "dma_device_type": 1 00:11:26.705 }, 00:11:26.705 { 00:11:26.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.705 "dma_device_type": 2 00:11:26.705 }, 00:11:26.705 { 00:11:26.705 "dma_device_id": "system", 00:11:26.705 "dma_device_type": 1 00:11:26.705 }, 00:11:26.705 { 00:11:26.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.705 "dma_device_type": 2 00:11:26.705 } 00:11:26.705 ], 00:11:26.705 "driver_specific": { 00:11:26.705 "raid": { 00:11:26.705 "uuid": "b67da187-b9b8-408d-9cf2-10fe2567b6b5", 00:11:26.705 "strip_size_kb": 0, 00:11:26.705 "state": "online", 00:11:26.705 "raid_level": "raid1", 00:11:26.705 "superblock": true, 00:11:26.705 "num_base_bdevs": 3, 00:11:26.705 "num_base_bdevs_discovered": 3, 00:11:26.705 "num_base_bdevs_operational": 3, 00:11:26.705 "base_bdevs_list": [ 00:11:26.705 { 00:11:26.705 "name": "pt1", 00:11:26.705 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:26.705 "is_configured": true, 00:11:26.705 "data_offset": 2048, 00:11:26.705 "data_size": 63488 00:11:26.705 }, 00:11:26.705 { 00:11:26.705 "name": "pt2", 00:11:26.705 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.705 "is_configured": true, 00:11:26.705 "data_offset": 2048, 00:11:26.705 "data_size": 63488 00:11:26.705 }, 00:11:26.705 { 00:11:26.705 "name": "pt3", 00:11:26.705 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:26.705 "is_configured": true, 00:11:26.705 "data_offset": 2048, 00:11:26.705 "data_size": 63488 00:11:26.706 } 00:11:26.706 ] 00:11:26.706 } 00:11:26.706 } 00:11:26.706 }' 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:26.706 pt2 00:11:26.706 pt3' 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.706 11:21:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.706 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.965 [2024-11-20 11:21:09.852686] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b67da187-b9b8-408d-9cf2-10fe2567b6b5 '!=' b67da187-b9b8-408d-9cf2-10fe2567b6b5 ']' 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.965 [2024-11-20 11:21:09.896360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.965 11:21:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.965 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.966 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.966 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.966 "name": "raid_bdev1", 00:11:26.966 "uuid": "b67da187-b9b8-408d-9cf2-10fe2567b6b5", 00:11:26.966 "strip_size_kb": 0, 00:11:26.966 "state": "online", 00:11:26.966 "raid_level": "raid1", 00:11:26.966 "superblock": true, 00:11:26.966 "num_base_bdevs": 3, 00:11:26.966 "num_base_bdevs_discovered": 2, 00:11:26.966 "num_base_bdevs_operational": 2, 00:11:26.966 "base_bdevs_list": [ 00:11:26.966 { 00:11:26.966 "name": null, 00:11:26.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.966 "is_configured": false, 00:11:26.966 "data_offset": 0, 00:11:26.966 "data_size": 63488 00:11:26.966 }, 00:11:26.966 { 00:11:26.966 "name": "pt2", 00:11:26.966 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.966 "is_configured": true, 00:11:26.966 "data_offset": 2048, 00:11:26.966 "data_size": 63488 00:11:26.966 }, 00:11:26.966 { 00:11:26.966 "name": "pt3", 00:11:26.966 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:26.966 "is_configured": true, 00:11:26.966 "data_offset": 2048, 00:11:26.966 "data_size": 63488 00:11:26.966 } 
00:11:26.966 ] 00:11:26.966 }' 00:11:26.966 11:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.966 11:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.225 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:27.225 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.225 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.485 [2024-11-20 11:21:10.339727] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:27.485 [2024-11-20 11:21:10.339767] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:27.485 [2024-11-20 11:21:10.339884] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:27.485 [2024-11-20 11:21:10.339976] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:27.485 [2024-11-20 11:21:10.340002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.485 11:21:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.485 [2024-11-20 11:21:10.431673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:27.485 [2024-11-20 11:21:10.431751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.485 [2024-11-20 11:21:10.431773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:27.485 [2024-11-20 11:21:10.431785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.485 [2024-11-20 11:21:10.434177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.485 [2024-11-20 11:21:10.434223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:27.485 [2024-11-20 11:21:10.434323] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:27.485 [2024-11-20 11:21:10.434380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:27.485 pt2 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.485 11:21:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.485 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.486 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.486 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.486 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.486 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.486 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.486 "name": "raid_bdev1", 00:11:27.486 "uuid": "b67da187-b9b8-408d-9cf2-10fe2567b6b5", 00:11:27.486 "strip_size_kb": 0, 00:11:27.486 "state": "configuring", 00:11:27.486 "raid_level": "raid1", 00:11:27.486 "superblock": true, 00:11:27.486 "num_base_bdevs": 3, 00:11:27.486 "num_base_bdevs_discovered": 1, 00:11:27.486 "num_base_bdevs_operational": 2, 00:11:27.486 "base_bdevs_list": [ 00:11:27.486 { 00:11:27.486 "name": null, 00:11:27.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.486 "is_configured": false, 00:11:27.486 "data_offset": 2048, 00:11:27.486 "data_size": 63488 00:11:27.486 }, 00:11:27.486 { 00:11:27.486 "name": "pt2", 00:11:27.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.486 "is_configured": true, 00:11:27.486 "data_offset": 2048, 00:11:27.486 "data_size": 63488 00:11:27.486 }, 00:11:27.486 { 00:11:27.486 "name": null, 00:11:27.486 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:27.486 "is_configured": false, 00:11:27.486 "data_offset": 2048, 00:11:27.486 "data_size": 63488 00:11:27.486 } 
00:11:27.486 ] 00:11:27.486 }' 00:11:27.486 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.486 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.054 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:28.054 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:28.054 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:11:28.054 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:28.054 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.054 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.054 [2024-11-20 11:21:10.910921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:28.054 [2024-11-20 11:21:10.910996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.054 [2024-11-20 11:21:10.911019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:28.054 [2024-11-20 11:21:10.911031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.054 [2024-11-20 11:21:10.911505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.054 [2024-11-20 11:21:10.911560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:28.054 [2024-11-20 11:21:10.911669] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:28.054 [2024-11-20 11:21:10.911699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:28.054 [2024-11-20 11:21:10.911843] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:11:28.054 [2024-11-20 11:21:10.911866] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:28.054 [2024-11-20 11:21:10.912148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:28.054 [2024-11-20 11:21:10.912319] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:28.054 [2024-11-20 11:21:10.912334] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:28.054 [2024-11-20 11:21:10.912507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.054 pt3 00:11:28.054 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.055 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:28.055 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.055 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.055 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.055 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.055 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:28.055 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.055 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.055 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.055 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.055 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.055 
11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.055 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.055 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.055 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.055 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.055 "name": "raid_bdev1", 00:11:28.055 "uuid": "b67da187-b9b8-408d-9cf2-10fe2567b6b5", 00:11:28.055 "strip_size_kb": 0, 00:11:28.055 "state": "online", 00:11:28.055 "raid_level": "raid1", 00:11:28.055 "superblock": true, 00:11:28.055 "num_base_bdevs": 3, 00:11:28.055 "num_base_bdevs_discovered": 2, 00:11:28.055 "num_base_bdevs_operational": 2, 00:11:28.055 "base_bdevs_list": [ 00:11:28.055 { 00:11:28.055 "name": null, 00:11:28.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.055 "is_configured": false, 00:11:28.055 "data_offset": 2048, 00:11:28.055 "data_size": 63488 00:11:28.055 }, 00:11:28.055 { 00:11:28.055 "name": "pt2", 00:11:28.055 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:28.055 "is_configured": true, 00:11:28.055 "data_offset": 2048, 00:11:28.055 "data_size": 63488 00:11:28.055 }, 00:11:28.055 { 00:11:28.055 "name": "pt3", 00:11:28.055 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:28.055 "is_configured": true, 00:11:28.055 "data_offset": 2048, 00:11:28.055 "data_size": 63488 00:11:28.055 } 00:11:28.055 ] 00:11:28.055 }' 00:11:28.055 11:21:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.055 11:21:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.315 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:28.315 11:21:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.315 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.315 [2024-11-20 11:21:11.394078] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.315 [2024-11-20 11:21:11.394124] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.315 [2024-11-20 11:21:11.394219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.315 [2024-11-20 11:21:11.394292] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.315 [2024-11-20 11:21:11.394303] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:28.315 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.315 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.315 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:28.315 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.315 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.315 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.575 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:28.575 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:28.575 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:11:28.575 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:11:28.575 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:11:28.575 11:21:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.575 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.575 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.575 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:28.575 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.575 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.575 [2024-11-20 11:21:11.465965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:28.575 [2024-11-20 11:21:11.466032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.575 [2024-11-20 11:21:11.466057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:28.575 [2024-11-20 11:21:11.466068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.575 [2024-11-20 11:21:11.468524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.575 [2024-11-20 11:21:11.468566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:28.576 [2024-11-20 11:21:11.468660] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:28.576 [2024-11-20 11:21:11.468737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:28.576 [2024-11-20 11:21:11.468893] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:28.576 [2024-11-20 11:21:11.468906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.576 [2024-11-20 11:21:11.468924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:11:28.576 [2024-11-20 11:21:11.469008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:28.576 pt1 00:11:28.576 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.576 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:11:28.576 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:28.576 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.576 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.576 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.576 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.576 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:28.576 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.576 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.576 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.576 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.576 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.576 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.576 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.576 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.576 11:21:11 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.576 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.576 "name": "raid_bdev1", 00:11:28.576 "uuid": "b67da187-b9b8-408d-9cf2-10fe2567b6b5", 00:11:28.576 "strip_size_kb": 0, 00:11:28.576 "state": "configuring", 00:11:28.576 "raid_level": "raid1", 00:11:28.576 "superblock": true, 00:11:28.576 "num_base_bdevs": 3, 00:11:28.576 "num_base_bdevs_discovered": 1, 00:11:28.576 "num_base_bdevs_operational": 2, 00:11:28.576 "base_bdevs_list": [ 00:11:28.576 { 00:11:28.576 "name": null, 00:11:28.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.576 "is_configured": false, 00:11:28.576 "data_offset": 2048, 00:11:28.576 "data_size": 63488 00:11:28.576 }, 00:11:28.576 { 00:11:28.576 "name": "pt2", 00:11:28.576 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:28.576 "is_configured": true, 00:11:28.576 "data_offset": 2048, 00:11:28.576 "data_size": 63488 00:11:28.576 }, 00:11:28.576 { 00:11:28.576 "name": null, 00:11:28.576 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:28.576 "is_configured": false, 00:11:28.576 "data_offset": 2048, 00:11:28.576 "data_size": 63488 00:11:28.576 } 00:11:28.576 ] 00:11:28.576 }' 00:11:28.576 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.576 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.835 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:28.835 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:28.835 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.835 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.835 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:29.094 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:29.094 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:29.094 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.094 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.094 [2024-11-20 11:21:11.977105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:29.094 [2024-11-20 11:21:11.977179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.094 [2024-11-20 11:21:11.977201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:29.094 [2024-11-20 11:21:11.977210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.094 [2024-11-20 11:21:11.977708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.094 [2024-11-20 11:21:11.977737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:29.094 [2024-11-20 11:21:11.977827] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:29.094 [2024-11-20 11:21:11.977880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:29.094 [2024-11-20 11:21:11.978023] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:29.094 [2024-11-20 11:21:11.978037] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:29.094 [2024-11-20 11:21:11.978311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:29.094 [2024-11-20 11:21:11.978513] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:29.094 [2024-11-20 11:21:11.978536] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:29.094 [2024-11-20 11:21:11.978707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.094 pt3 00:11:29.094 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.094 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:29.094 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.094 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.094 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.094 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.094 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:29.095 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.095 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.095 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.095 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.095 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.095 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.095 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.095 11:21:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.095 11:21:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:29.095 11:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.095 "name": "raid_bdev1", 00:11:29.095 "uuid": "b67da187-b9b8-408d-9cf2-10fe2567b6b5", 00:11:29.095 "strip_size_kb": 0, 00:11:29.095 "state": "online", 00:11:29.095 "raid_level": "raid1", 00:11:29.095 "superblock": true, 00:11:29.095 "num_base_bdevs": 3, 00:11:29.095 "num_base_bdevs_discovered": 2, 00:11:29.095 "num_base_bdevs_operational": 2, 00:11:29.095 "base_bdevs_list": [ 00:11:29.095 { 00:11:29.095 "name": null, 00:11:29.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.095 "is_configured": false, 00:11:29.095 "data_offset": 2048, 00:11:29.095 "data_size": 63488 00:11:29.095 }, 00:11:29.095 { 00:11:29.095 "name": "pt2", 00:11:29.095 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:29.095 "is_configured": true, 00:11:29.095 "data_offset": 2048, 00:11:29.095 "data_size": 63488 00:11:29.095 }, 00:11:29.095 { 00:11:29.095 "name": "pt3", 00:11:29.095 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:29.095 "is_configured": true, 00:11:29.095 "data_offset": 2048, 00:11:29.095 "data_size": 63488 00:11:29.095 } 00:11:29.095 ] 00:11:29.095 }' 00:11:29.095 11:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.095 11:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.354 11:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:29.354 11:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.354 11:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.354 11:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:29.354 11:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.354 11:21:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:29.354 11:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:29.354 11:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.354 11:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.354 11:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:29.354 [2024-11-20 11:21:12.456608] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.614 11:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.614 11:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b67da187-b9b8-408d-9cf2-10fe2567b6b5 '!=' b67da187-b9b8-408d-9cf2-10fe2567b6b5 ']' 00:11:29.614 11:21:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68777 00:11:29.615 11:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68777 ']' 00:11:29.615 11:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68777 00:11:29.615 11:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:29.615 11:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:29.615 11:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68777 00:11:29.615 killing process with pid 68777 00:11:29.615 11:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:29.615 11:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:29.615 11:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68777' 00:11:29.615 11:21:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68777 00:11:29.615 [2024-11-20 11:21:12.538726] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.615 [2024-11-20 11:21:12.538841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.615 [2024-11-20 11:21:12.538903] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.615 11:21:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68777 00:11:29.615 [2024-11-20 11:21:12.538916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:29.874 [2024-11-20 11:21:12.843583] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:31.319 11:21:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:31.319 00:11:31.319 real 0m7.968s 00:11:31.319 user 0m12.560s 00:11:31.319 sys 0m1.387s 00:11:31.319 11:21:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.319 11:21:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.319 ************************************ 00:11:31.319 END TEST raid_superblock_test 00:11:31.319 ************************************ 00:11:31.319 11:21:14 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:11:31.319 11:21:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:31.319 11:21:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.319 11:21:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:31.319 ************************************ 00:11:31.319 START TEST raid_read_error_test 00:11:31.319 ************************************ 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:11:31.319 11:21:14
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:31.319 11:21:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Ix8LldbuNI 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69223 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69223 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69223 ']' 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:31.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:31.319 11:21:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.319 [2024-11-20 11:21:14.146890] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:11:31.319 [2024-11-20 11:21:14.147013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69223 ] 00:11:31.319 [2024-11-20 11:21:14.299210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.319 [2024-11-20 11:21:14.419789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.579 [2024-11-20 11:21:14.636484] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.579 [2024-11-20 11:21:14.636537] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.148 BaseBdev1_malloc 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.148 true 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.148 [2024-11-20 11:21:15.077720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:32.148 [2024-11-20 11:21:15.077794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.148 [2024-11-20 11:21:15.077816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:32.148 [2024-11-20 11:21:15.077827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.148 [2024-11-20 11:21:15.079958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.148 [2024-11-20 11:21:15.080003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:32.148 BaseBdev1 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.148 BaseBdev2_malloc 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.148 true 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.148 [2024-11-20 11:21:15.148780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:32.148 [2024-11-20 11:21:15.148851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.148 [2024-11-20 11:21:15.148874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:32.148 [2024-11-20 11:21:15.148886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.148 [2024-11-20 11:21:15.151211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.148 [2024-11-20 11:21:15.151257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:32.148 BaseBdev2 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.148 BaseBdev3_malloc 00:11:32.148 11:21:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.148 true 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.148 [2024-11-20 11:21:15.230081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:32.148 [2024-11-20 11:21:15.230163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.148 [2024-11-20 11:21:15.230194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:32.148 [2024-11-20 11:21:15.230213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.148 [2024-11-20 11:21:15.233210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.148 [2024-11-20 11:21:15.233277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:32.148 BaseBdev3 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.148 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.148 [2024-11-20 11:21:15.242332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.148 [2024-11-20 11:21:15.245008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:32.148 [2024-11-20 11:21:15.245132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:32.148 [2024-11-20 11:21:15.245490] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:32.148 [2024-11-20 11:21:15.245525] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:32.148 [2024-11-20 11:21:15.245914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:32.149 [2024-11-20 11:21:15.246194] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:32.149 [2024-11-20 11:21:15.246231] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:32.149 [2024-11-20 11:21:15.246540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.149 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.149 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:32.149 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.149 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.149 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.149 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.149 11:21:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.149 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.149 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.149 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.149 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.149 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.149 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.149 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.149 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.409 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.409 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.409 "name": "raid_bdev1", 00:11:32.409 "uuid": "934de2d9-a64d-4a94-8b74-b7703582c0b4", 00:11:32.409 "strip_size_kb": 0, 00:11:32.409 "state": "online", 00:11:32.409 "raid_level": "raid1", 00:11:32.409 "superblock": true, 00:11:32.409 "num_base_bdevs": 3, 00:11:32.409 "num_base_bdevs_discovered": 3, 00:11:32.409 "num_base_bdevs_operational": 3, 00:11:32.409 "base_bdevs_list": [ 00:11:32.409 { 00:11:32.409 "name": "BaseBdev1", 00:11:32.409 "uuid": "a3572604-354d-554c-8e31-4c26104a25ce", 00:11:32.409 "is_configured": true, 00:11:32.409 "data_offset": 2048, 00:11:32.409 "data_size": 63488 00:11:32.409 }, 00:11:32.409 { 00:11:32.409 "name": "BaseBdev2", 00:11:32.409 "uuid": "af81b3d0-baea-5b64-b451-a0b2331ff479", 00:11:32.409 "is_configured": true, 00:11:32.409 "data_offset": 2048, 00:11:32.409 "data_size": 63488 
00:11:32.409 }, 00:11:32.409 { 00:11:32.409 "name": "BaseBdev3", 00:11:32.409 "uuid": "1b7342ce-3c21-5478-9a59-8449d531a800", 00:11:32.409 "is_configured": true, 00:11:32.409 "data_offset": 2048, 00:11:32.409 "data_size": 63488 00:11:32.409 } 00:11:32.409 ] 00:11:32.409 }' 00:11:32.409 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.409 11:21:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.669 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:32.669 11:21:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:32.929 [2024-11-20 11:21:15.823033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.868 
11:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.868 "name": "raid_bdev1", 00:11:33.868 "uuid": "934de2d9-a64d-4a94-8b74-b7703582c0b4", 00:11:33.868 "strip_size_kb": 0, 00:11:33.868 "state": "online", 00:11:33.868 "raid_level": "raid1", 00:11:33.868 "superblock": true, 00:11:33.868 "num_base_bdevs": 3, 00:11:33.868 "num_base_bdevs_discovered": 3, 00:11:33.868 "num_base_bdevs_operational": 3, 00:11:33.868 "base_bdevs_list": [ 00:11:33.868 { 00:11:33.868 "name": "BaseBdev1", 00:11:33.868 "uuid": "a3572604-354d-554c-8e31-4c26104a25ce", 
00:11:33.868 "is_configured": true, 00:11:33.868 "data_offset": 2048, 00:11:33.868 "data_size": 63488 00:11:33.868 }, 00:11:33.868 { 00:11:33.868 "name": "BaseBdev2", 00:11:33.868 "uuid": "af81b3d0-baea-5b64-b451-a0b2331ff479", 00:11:33.868 "is_configured": true, 00:11:33.868 "data_offset": 2048, 00:11:33.868 "data_size": 63488 00:11:33.868 }, 00:11:33.868 { 00:11:33.868 "name": "BaseBdev3", 00:11:33.868 "uuid": "1b7342ce-3c21-5478-9a59-8449d531a800", 00:11:33.868 "is_configured": true, 00:11:33.868 "data_offset": 2048, 00:11:33.868 "data_size": 63488 00:11:33.868 } 00:11:33.868 ] 00:11:33.868 }' 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.868 11:21:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.128 11:21:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:34.128 11:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.128 11:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.128 [2024-11-20 11:21:17.198957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:34.128 [2024-11-20 11:21:17.199000] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:34.128 [2024-11-20 11:21:17.201965] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:34.128 [2024-11-20 11:21:17.202038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.128 [2024-11-20 11:21:17.202142] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:34.129 [2024-11-20 11:21:17.202152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:34.129 { 00:11:34.129 "results": [ 00:11:34.129 { 00:11:34.129 "job": "raid_bdev1", 
00:11:34.129 "core_mask": "0x1", 00:11:34.129 "workload": "randrw", 00:11:34.129 "percentage": 50, 00:11:34.129 "status": "finished", 00:11:34.129 "queue_depth": 1, 00:11:34.129 "io_size": 131072, 00:11:34.129 "runtime": 1.37667, 00:11:34.129 "iops": 12875.271488446759, 00:11:34.129 "mibps": 1609.4089360558448, 00:11:34.129 "io_failed": 0, 00:11:34.129 "io_timeout": 0, 00:11:34.129 "avg_latency_us": 74.96359130579387, 00:11:34.129 "min_latency_us": 23.923144104803495, 00:11:34.129 "max_latency_us": 1516.7720524017468 00:11:34.129 } 00:11:34.129 ], 00:11:34.129 "core_count": 1 00:11:34.129 } 00:11:34.129 11:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.129 11:21:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69223 00:11:34.129 11:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69223 ']' 00:11:34.129 11:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69223 00:11:34.129 11:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:34.129 11:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:34.129 11:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69223 00:11:34.129 11:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:34.129 11:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:34.129 killing process with pid 69223 00:11:34.129 11:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69223' 00:11:34.129 11:21:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69223 00:11:34.129 [2024-11-20 11:21:17.240303] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:34.129 11:21:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69223 00:11:34.389 [2024-11-20 11:21:17.473636] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:35.771 11:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Ix8LldbuNI 00:11:35.771 11:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:35.771 11:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:35.771 11:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:35.771 11:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:35.771 11:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:35.771 11:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:35.771 11:21:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:35.771 00:11:35.771 real 0m4.619s 00:11:35.771 user 0m5.565s 00:11:35.771 sys 0m0.548s 00:11:35.771 11:21:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.771 11:21:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.771 ************************************ 00:11:35.771 END TEST raid_read_error_test 00:11:35.771 ************************************ 00:11:35.771 11:21:18 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:11:35.771 11:21:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:35.771 11:21:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.771 11:21:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:35.771 ************************************ 00:11:35.771 START TEST raid_write_error_test 00:11:35.771 ************************************ 00:11:35.771 11:21:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xitDsZE2W0 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69370 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69370 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69370 ']' 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.771 11:21:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.771 [2024-11-20 11:21:18.841501] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:11:35.771 [2024-11-20 11:21:18.841628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69370 ] 00:11:36.047 [2024-11-20 11:21:18.996829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.047 [2024-11-20 11:21:19.112831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.351 [2024-11-20 11:21:19.310208] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.351 [2024-11-20 11:21:19.310278] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.610 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.610 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:36.610 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.610 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:36.610 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.610 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.871 BaseBdev1_malloc 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.871 true 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.871 [2024-11-20 11:21:19.752609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:36.871 [2024-11-20 11:21:19.752667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.871 [2024-11-20 11:21:19.752688] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:36.871 [2024-11-20 11:21:19.752699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.871 [2024-11-20 11:21:19.754849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.871 [2024-11-20 11:21:19.754889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:36.871 BaseBdev1 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:36.871 BaseBdev2_malloc 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.871 true 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.871 [2024-11-20 11:21:19.819613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:36.871 [2024-11-20 11:21:19.819670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.871 [2024-11-20 11:21:19.819688] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:36.871 [2024-11-20 11:21:19.819700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.871 [2024-11-20 11:21:19.821982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.871 [2024-11-20 11:21:19.822026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:36.871 BaseBdev2 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.871 11:21:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.871 BaseBdev3_malloc 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.871 true 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.871 [2024-11-20 11:21:19.903602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:36.871 [2024-11-20 11:21:19.903662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.871 [2024-11-20 11:21:19.903683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:36.871 [2024-11-20 11:21:19.903695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.871 [2024-11-20 11:21:19.906026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.871 [2024-11-20 11:21:19.906072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:36.871 BaseBdev3 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.871 [2024-11-20 11:21:19.915666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.871 [2024-11-20 11:21:19.917748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.871 [2024-11-20 11:21:19.917837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:36.871 [2024-11-20 11:21:19.918061] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:36.871 [2024-11-20 11:21:19.918094] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:36.871 [2024-11-20 11:21:19.918422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:36.871 [2024-11-20 11:21:19.918654] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:36.871 [2024-11-20 11:21:19.918677] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:36.871 [2024-11-20 11:21:19.918856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.871 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.872 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.872 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.872 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.872 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.872 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.872 "name": "raid_bdev1", 00:11:36.872 "uuid": "17bc16f0-d340-46f7-80c2-81effd385753", 00:11:36.872 "strip_size_kb": 0, 00:11:36.872 "state": "online", 00:11:36.872 "raid_level": "raid1", 00:11:36.872 "superblock": true, 00:11:36.872 "num_base_bdevs": 3, 00:11:36.872 "num_base_bdevs_discovered": 3, 00:11:36.872 "num_base_bdevs_operational": 3, 00:11:36.872 "base_bdevs_list": [ 00:11:36.872 { 00:11:36.872 "name": "BaseBdev1", 00:11:36.872 
"uuid": "48752d2c-59ab-5fce-a9dd-9865fecc29b4", 00:11:36.872 "is_configured": true, 00:11:36.872 "data_offset": 2048, 00:11:36.872 "data_size": 63488 00:11:36.872 }, 00:11:36.872 { 00:11:36.872 "name": "BaseBdev2", 00:11:36.872 "uuid": "da6302a1-ca93-5da4-952e-ae8af505811c", 00:11:36.872 "is_configured": true, 00:11:36.872 "data_offset": 2048, 00:11:36.872 "data_size": 63488 00:11:36.872 }, 00:11:36.872 { 00:11:36.872 "name": "BaseBdev3", 00:11:36.872 "uuid": "eb997d31-9dd1-5473-a0a9-86899b8de4e7", 00:11:36.872 "is_configured": true, 00:11:36.872 "data_offset": 2048, 00:11:36.872 "data_size": 63488 00:11:36.872 } 00:11:36.872 ] 00:11:36.872 }' 00:11:36.872 11:21:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.872 11:21:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.441 11:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:37.441 11:21:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:37.441 [2024-11-20 11:21:20.480437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.380 [2024-11-20 11:21:21.377363] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:38.380 [2024-11-20 11:21:21.377423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:38.380 [2024-11-20 11:21:21.377671] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.380 "name": "raid_bdev1", 00:11:38.380 "uuid": "17bc16f0-d340-46f7-80c2-81effd385753", 00:11:38.380 "strip_size_kb": 0, 00:11:38.380 "state": "online", 00:11:38.380 "raid_level": "raid1", 00:11:38.380 "superblock": true, 00:11:38.380 "num_base_bdevs": 3, 00:11:38.380 "num_base_bdevs_discovered": 2, 00:11:38.380 "num_base_bdevs_operational": 2, 00:11:38.380 "base_bdevs_list": [ 00:11:38.380 { 00:11:38.380 "name": null, 00:11:38.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.380 "is_configured": false, 00:11:38.380 "data_offset": 0, 00:11:38.380 "data_size": 63488 00:11:38.380 }, 00:11:38.380 { 00:11:38.380 "name": "BaseBdev2", 00:11:38.380 "uuid": "da6302a1-ca93-5da4-952e-ae8af505811c", 00:11:38.380 "is_configured": true, 00:11:38.380 "data_offset": 2048, 00:11:38.380 "data_size": 63488 00:11:38.380 }, 00:11:38.380 { 00:11:38.380 "name": "BaseBdev3", 00:11:38.380 "uuid": "eb997d31-9dd1-5473-a0a9-86899b8de4e7", 00:11:38.380 "is_configured": true, 00:11:38.380 "data_offset": 2048, 00:11:38.380 "data_size": 63488 00:11:38.380 } 00:11:38.380 ] 00:11:38.380 }' 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.380 11:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.950 11:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:38.950 11:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.950 11:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.950 [2024-11-20 11:21:21.877819] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:38.950 [2024-11-20 11:21:21.877866] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.950 [2024-11-20 11:21:21.881112] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.950 [2024-11-20 11:21:21.881193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.950 [2024-11-20 11:21:21.881286] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.950 [2024-11-20 11:21:21.881310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:38.950 { 00:11:38.950 "results": [ 00:11:38.950 { 00:11:38.950 "job": "raid_bdev1", 00:11:38.950 "core_mask": "0x1", 00:11:38.950 "workload": "randrw", 00:11:38.950 "percentage": 50, 00:11:38.950 "status": "finished", 00:11:38.950 "queue_depth": 1, 00:11:38.950 "io_size": 131072, 00:11:38.950 "runtime": 1.397957, 00:11:38.950 "iops": 12456.74938499539, 00:11:38.950 "mibps": 1557.0936731244237, 00:11:38.950 "io_failed": 0, 00:11:38.950 "io_timeout": 0, 00:11:38.950 "avg_latency_us": 76.93871086005687, 00:11:38.950 "min_latency_us": 27.612227074235808, 00:11:38.950 "max_latency_us": 1781.4917030567685 00:11:38.950 } 00:11:38.950 ], 00:11:38.950 "core_count": 1 00:11:38.950 } 00:11:38.950 11:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.950 11:21:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69370 00:11:38.950 11:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69370 ']' 00:11:38.950 11:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69370 00:11:38.950 11:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:38.950 11:21:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.950 11:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69370 00:11:38.950 11:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.950 11:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.950 11:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69370' 00:11:38.950 killing process with pid 69370 00:11:38.950 11:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69370 00:11:38.950 [2024-11-20 11:21:21.925289] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:38.950 11:21:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69370 00:11:39.210 [2024-11-20 11:21:22.198479] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:40.587 11:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xitDsZE2W0 00:11:40.587 11:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:40.587 11:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:40.587 11:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:40.587 11:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:40.587 11:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:40.587 11:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:40.587 11:21:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:40.587 00:11:40.587 real 0m4.867s 00:11:40.587 user 0m5.813s 00:11:40.587 sys 0m0.541s 00:11:40.587 11:21:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.587 11:21:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.587 ************************************ 00:11:40.587 END TEST raid_write_error_test 00:11:40.587 ************************************ 00:11:40.587 11:21:23 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:40.587 11:21:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:40.587 11:21:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:11:40.587 11:21:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:40.587 11:21:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.587 11:21:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:40.588 ************************************ 00:11:40.588 START TEST raid_state_function_test 00:11:40.588 ************************************ 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:40.588 
11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69513 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:40.588 Process raid pid: 69513 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69513' 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69513 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69513 ']' 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.588 11:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.847 [2024-11-20 11:21:23.769106] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:11:40.847 [2024-11-20 11:21:23.769247] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.847 [2024-11-20 11:21:23.951984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.107 [2024-11-20 11:21:24.091745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.365 [2024-11-20 11:21:24.337294] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.366 [2024-11-20 11:21:24.337341] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.624 11:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.624 11:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:41.624 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:41.624 11:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.624 11:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.624 [2024-11-20 11:21:24.699860] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:41.624 [2024-11-20 11:21:24.699925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:41.624 [2024-11-20 11:21:24.699938] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:41.624 [2024-11-20 11:21:24.699951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:41.624 [2024-11-20 11:21:24.699959] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:41.624 [2024-11-20 11:21:24.699969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:41.624 [2024-11-20 11:21:24.699977] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:41.625 [2024-11-20 11:21:24.699987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:41.625 11:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.625 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:41.625 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.625 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.625 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:41.625 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.625 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.625 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.625 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.625 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.625 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.625 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.625 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.625 11:21:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.625 11:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.625 11:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.886 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.886 "name": "Existed_Raid", 00:11:41.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.886 "strip_size_kb": 64, 00:11:41.886 "state": "configuring", 00:11:41.886 "raid_level": "raid0", 00:11:41.886 "superblock": false, 00:11:41.886 "num_base_bdevs": 4, 00:11:41.886 "num_base_bdevs_discovered": 0, 00:11:41.886 "num_base_bdevs_operational": 4, 00:11:41.886 "base_bdevs_list": [ 00:11:41.886 { 00:11:41.886 "name": "BaseBdev1", 00:11:41.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.886 "is_configured": false, 00:11:41.886 "data_offset": 0, 00:11:41.886 "data_size": 0 00:11:41.886 }, 00:11:41.886 { 00:11:41.886 "name": "BaseBdev2", 00:11:41.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.886 "is_configured": false, 00:11:41.886 "data_offset": 0, 00:11:41.886 "data_size": 0 00:11:41.886 }, 00:11:41.886 { 00:11:41.886 "name": "BaseBdev3", 00:11:41.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.886 "is_configured": false, 00:11:41.886 "data_offset": 0, 00:11:41.886 "data_size": 0 00:11:41.886 }, 00:11:41.886 { 00:11:41.886 "name": "BaseBdev4", 00:11:41.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.886 "is_configured": false, 00:11:41.886 "data_offset": 0, 00:11:41.886 "data_size": 0 00:11:41.886 } 00:11:41.886 ] 00:11:41.886 }' 00:11:41.886 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.886 11:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.146 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:42.146 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.146 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.146 [2024-11-20 11:21:25.163638] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.146 [2024-11-20 11:21:25.163694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:42.146 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.146 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:42.146 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.146 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.146 [2024-11-20 11:21:25.171620] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:42.146 [2024-11-20 11:21:25.171676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:42.146 [2024-11-20 11:21:25.171687] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:42.146 [2024-11-20 11:21:25.171698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:42.146 [2024-11-20 11:21:25.171706] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:42.146 [2024-11-20 11:21:25.171716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:42.146 [2024-11-20 11:21:25.171724] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:42.146 [2024-11-20 11:21:25.171734] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:42.146 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.146 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:42.146 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.146 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.146 [2024-11-20 11:21:25.223527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.146 BaseBdev1 00:11:42.146 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.146 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:42.146 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:42.146 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:42.146 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:42.146 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:42.147 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:42.147 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:42.147 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.147 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.147 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.147 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:42.147 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.147 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.147 [ 00:11:42.147 { 00:11:42.147 "name": "BaseBdev1", 00:11:42.147 "aliases": [ 00:11:42.147 "f202695a-25c9-474e-ba38-c3746da70c90" 00:11:42.147 ], 00:11:42.147 "product_name": "Malloc disk", 00:11:42.147 "block_size": 512, 00:11:42.147 "num_blocks": 65536, 00:11:42.147 "uuid": "f202695a-25c9-474e-ba38-c3746da70c90", 00:11:42.147 "assigned_rate_limits": { 00:11:42.147 "rw_ios_per_sec": 0, 00:11:42.147 "rw_mbytes_per_sec": 0, 00:11:42.147 "r_mbytes_per_sec": 0, 00:11:42.147 "w_mbytes_per_sec": 0 00:11:42.147 }, 00:11:42.147 "claimed": true, 00:11:42.147 "claim_type": "exclusive_write", 00:11:42.147 "zoned": false, 00:11:42.147 "supported_io_types": { 00:11:42.147 "read": true, 00:11:42.147 "write": true, 00:11:42.147 "unmap": true, 00:11:42.147 "flush": true, 00:11:42.147 "reset": true, 00:11:42.147 "nvme_admin": false, 00:11:42.147 "nvme_io": false, 00:11:42.147 "nvme_io_md": false, 00:11:42.147 "write_zeroes": true, 00:11:42.147 "zcopy": true, 00:11:42.147 "get_zone_info": false, 00:11:42.147 "zone_management": false, 00:11:42.147 "zone_append": false, 00:11:42.147 "compare": false, 00:11:42.147 "compare_and_write": false, 00:11:42.147 "abort": true, 00:11:42.147 "seek_hole": false, 00:11:42.147 "seek_data": false, 00:11:42.147 "copy": true, 00:11:42.147 "nvme_iov_md": false 00:11:42.147 }, 00:11:42.147 "memory_domains": [ 00:11:42.147 { 00:11:42.147 "dma_device_id": "system", 00:11:42.147 "dma_device_type": 1 00:11:42.147 }, 00:11:42.147 { 00:11:42.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.147 "dma_device_type": 2 00:11:42.147 } 00:11:42.147 ], 00:11:42.147 "driver_specific": {} 00:11:42.147 } 00:11:42.147 ] 00:11:42.147 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:42.147 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:42.406 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:42.406 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.406 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.406 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:42.406 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.406 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.406 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.406 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.406 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.406 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.406 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.406 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.406 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.406 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.406 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.406 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.406 "name": "Existed_Raid", 
00:11:42.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.406 "strip_size_kb": 64, 00:11:42.406 "state": "configuring", 00:11:42.406 "raid_level": "raid0", 00:11:42.406 "superblock": false, 00:11:42.406 "num_base_bdevs": 4, 00:11:42.406 "num_base_bdevs_discovered": 1, 00:11:42.406 "num_base_bdevs_operational": 4, 00:11:42.406 "base_bdevs_list": [ 00:11:42.406 { 00:11:42.406 "name": "BaseBdev1", 00:11:42.406 "uuid": "f202695a-25c9-474e-ba38-c3746da70c90", 00:11:42.406 "is_configured": true, 00:11:42.406 "data_offset": 0, 00:11:42.406 "data_size": 65536 00:11:42.406 }, 00:11:42.406 { 00:11:42.406 "name": "BaseBdev2", 00:11:42.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.406 "is_configured": false, 00:11:42.406 "data_offset": 0, 00:11:42.406 "data_size": 0 00:11:42.406 }, 00:11:42.406 { 00:11:42.406 "name": "BaseBdev3", 00:11:42.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.406 "is_configured": false, 00:11:42.406 "data_offset": 0, 00:11:42.406 "data_size": 0 00:11:42.406 }, 00:11:42.406 { 00:11:42.406 "name": "BaseBdev4", 00:11:42.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.406 "is_configured": false, 00:11:42.406 "data_offset": 0, 00:11:42.406 "data_size": 0 00:11:42.406 } 00:11:42.406 ] 00:11:42.406 }' 00:11:42.406 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.406 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.665 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:42.665 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.665 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.665 [2024-11-20 11:21:25.734721] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.665 [2024-11-20 11:21:25.734793] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.666 [2024-11-20 11:21:25.742769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.666 [2024-11-20 11:21:25.744879] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:42.666 [2024-11-20 11:21:25.744930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:42.666 [2024-11-20 11:21:25.744942] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:42.666 [2024-11-20 11:21:25.744956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:42.666 [2024-11-20 11:21:25.744964] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:42.666 [2024-11-20 11:21:25.744974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.666 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.925 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.925 "name": "Existed_Raid", 00:11:42.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.925 "strip_size_kb": 64, 00:11:42.925 "state": "configuring", 00:11:42.925 "raid_level": "raid0", 00:11:42.925 "superblock": false, 00:11:42.925 "num_base_bdevs": 4, 00:11:42.925 
"num_base_bdevs_discovered": 1, 00:11:42.925 "num_base_bdevs_operational": 4, 00:11:42.925 "base_bdevs_list": [ 00:11:42.925 { 00:11:42.925 "name": "BaseBdev1", 00:11:42.925 "uuid": "f202695a-25c9-474e-ba38-c3746da70c90", 00:11:42.925 "is_configured": true, 00:11:42.925 "data_offset": 0, 00:11:42.925 "data_size": 65536 00:11:42.925 }, 00:11:42.925 { 00:11:42.925 "name": "BaseBdev2", 00:11:42.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.925 "is_configured": false, 00:11:42.925 "data_offset": 0, 00:11:42.925 "data_size": 0 00:11:42.925 }, 00:11:42.925 { 00:11:42.925 "name": "BaseBdev3", 00:11:42.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.925 "is_configured": false, 00:11:42.925 "data_offset": 0, 00:11:42.925 "data_size": 0 00:11:42.925 }, 00:11:42.925 { 00:11:42.925 "name": "BaseBdev4", 00:11:42.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.925 "is_configured": false, 00:11:42.925 "data_offset": 0, 00:11:42.925 "data_size": 0 00:11:42.925 } 00:11:42.925 ] 00:11:42.925 }' 00:11:42.925 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.925 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.184 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:43.184 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.184 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.185 [2024-11-20 11:21:26.247780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:43.185 BaseBdev2 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:43.185 11:21:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.185 [ 00:11:43.185 { 00:11:43.185 "name": "BaseBdev2", 00:11:43.185 "aliases": [ 00:11:43.185 "74e0ded1-f6c7-4e59-a05f-3ec1a8a292f0" 00:11:43.185 ], 00:11:43.185 "product_name": "Malloc disk", 00:11:43.185 "block_size": 512, 00:11:43.185 "num_blocks": 65536, 00:11:43.185 "uuid": "74e0ded1-f6c7-4e59-a05f-3ec1a8a292f0", 00:11:43.185 "assigned_rate_limits": { 00:11:43.185 "rw_ios_per_sec": 0, 00:11:43.185 "rw_mbytes_per_sec": 0, 00:11:43.185 "r_mbytes_per_sec": 0, 00:11:43.185 "w_mbytes_per_sec": 0 00:11:43.185 }, 00:11:43.185 "claimed": true, 00:11:43.185 "claim_type": "exclusive_write", 00:11:43.185 "zoned": false, 00:11:43.185 "supported_io_types": { 
00:11:43.185 "read": true, 00:11:43.185 "write": true, 00:11:43.185 "unmap": true, 00:11:43.185 "flush": true, 00:11:43.185 "reset": true, 00:11:43.185 "nvme_admin": false, 00:11:43.185 "nvme_io": false, 00:11:43.185 "nvme_io_md": false, 00:11:43.185 "write_zeroes": true, 00:11:43.185 "zcopy": true, 00:11:43.185 "get_zone_info": false, 00:11:43.185 "zone_management": false, 00:11:43.185 "zone_append": false, 00:11:43.185 "compare": false, 00:11:43.185 "compare_and_write": false, 00:11:43.185 "abort": true, 00:11:43.185 "seek_hole": false, 00:11:43.185 "seek_data": false, 00:11:43.185 "copy": true, 00:11:43.185 "nvme_iov_md": false 00:11:43.185 }, 00:11:43.185 "memory_domains": [ 00:11:43.185 { 00:11:43.185 "dma_device_id": "system", 00:11:43.185 "dma_device_type": 1 00:11:43.185 }, 00:11:43.185 { 00:11:43.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.185 "dma_device_type": 2 00:11:43.185 } 00:11:43.185 ], 00:11:43.185 "driver_specific": {} 00:11:43.185 } 00:11:43.185 ] 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.185 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.444 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.444 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.444 "name": "Existed_Raid", 00:11:43.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.444 "strip_size_kb": 64, 00:11:43.444 "state": "configuring", 00:11:43.444 "raid_level": "raid0", 00:11:43.444 "superblock": false, 00:11:43.444 "num_base_bdevs": 4, 00:11:43.444 "num_base_bdevs_discovered": 2, 00:11:43.444 "num_base_bdevs_operational": 4, 00:11:43.444 "base_bdevs_list": [ 00:11:43.444 { 00:11:43.444 "name": "BaseBdev1", 00:11:43.444 "uuid": "f202695a-25c9-474e-ba38-c3746da70c90", 00:11:43.444 "is_configured": true, 00:11:43.444 "data_offset": 0, 00:11:43.444 "data_size": 65536 00:11:43.444 }, 00:11:43.444 { 00:11:43.444 "name": "BaseBdev2", 00:11:43.444 "uuid": "74e0ded1-f6c7-4e59-a05f-3ec1a8a292f0", 00:11:43.444 
"is_configured": true, 00:11:43.444 "data_offset": 0, 00:11:43.444 "data_size": 65536 00:11:43.444 }, 00:11:43.444 { 00:11:43.444 "name": "BaseBdev3", 00:11:43.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.444 "is_configured": false, 00:11:43.444 "data_offset": 0, 00:11:43.444 "data_size": 0 00:11:43.444 }, 00:11:43.444 { 00:11:43.444 "name": "BaseBdev4", 00:11:43.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.444 "is_configured": false, 00:11:43.444 "data_offset": 0, 00:11:43.444 "data_size": 0 00:11:43.444 } 00:11:43.444 ] 00:11:43.444 }' 00:11:43.444 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.444 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.703 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:43.703 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.703 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.963 [2024-11-20 11:21:26.818349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:43.963 BaseBdev3 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.963 [ 00:11:43.963 { 00:11:43.963 "name": "BaseBdev3", 00:11:43.963 "aliases": [ 00:11:43.963 "598a91f5-e434-4b94-904a-5a4da77f2d9a" 00:11:43.963 ], 00:11:43.963 "product_name": "Malloc disk", 00:11:43.963 "block_size": 512, 00:11:43.963 "num_blocks": 65536, 00:11:43.963 "uuid": "598a91f5-e434-4b94-904a-5a4da77f2d9a", 00:11:43.963 "assigned_rate_limits": { 00:11:43.963 "rw_ios_per_sec": 0, 00:11:43.963 "rw_mbytes_per_sec": 0, 00:11:43.963 "r_mbytes_per_sec": 0, 00:11:43.963 "w_mbytes_per_sec": 0 00:11:43.963 }, 00:11:43.963 "claimed": true, 00:11:43.963 "claim_type": "exclusive_write", 00:11:43.963 "zoned": false, 00:11:43.963 "supported_io_types": { 00:11:43.963 "read": true, 00:11:43.963 "write": true, 00:11:43.963 "unmap": true, 00:11:43.963 "flush": true, 00:11:43.963 "reset": true, 00:11:43.963 "nvme_admin": false, 00:11:43.963 "nvme_io": false, 00:11:43.963 "nvme_io_md": false, 00:11:43.963 "write_zeroes": true, 00:11:43.963 "zcopy": true, 00:11:43.963 "get_zone_info": false, 00:11:43.963 "zone_management": false, 00:11:43.963 "zone_append": false, 00:11:43.963 "compare": false, 00:11:43.963 "compare_and_write": false, 
00:11:43.963 "abort": true, 00:11:43.963 "seek_hole": false, 00:11:43.963 "seek_data": false, 00:11:43.963 "copy": true, 00:11:43.963 "nvme_iov_md": false 00:11:43.963 }, 00:11:43.963 "memory_domains": [ 00:11:43.963 { 00:11:43.963 "dma_device_id": "system", 00:11:43.963 "dma_device_type": 1 00:11:43.963 }, 00:11:43.963 { 00:11:43.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.963 "dma_device_type": 2 00:11:43.963 } 00:11:43.963 ], 00:11:43.963 "driver_specific": {} 00:11:43.963 } 00:11:43.963 ] 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.963 "name": "Existed_Raid", 00:11:43.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.963 "strip_size_kb": 64, 00:11:43.963 "state": "configuring", 00:11:43.963 "raid_level": "raid0", 00:11:43.963 "superblock": false, 00:11:43.963 "num_base_bdevs": 4, 00:11:43.963 "num_base_bdevs_discovered": 3, 00:11:43.963 "num_base_bdevs_operational": 4, 00:11:43.963 "base_bdevs_list": [ 00:11:43.963 { 00:11:43.963 "name": "BaseBdev1", 00:11:43.963 "uuid": "f202695a-25c9-474e-ba38-c3746da70c90", 00:11:43.963 "is_configured": true, 00:11:43.963 "data_offset": 0, 00:11:43.963 "data_size": 65536 00:11:43.963 }, 00:11:43.963 { 00:11:43.963 "name": "BaseBdev2", 00:11:43.963 "uuid": "74e0ded1-f6c7-4e59-a05f-3ec1a8a292f0", 00:11:43.963 "is_configured": true, 00:11:43.963 "data_offset": 0, 00:11:43.963 "data_size": 65536 00:11:43.963 }, 00:11:43.963 { 00:11:43.963 "name": "BaseBdev3", 00:11:43.963 "uuid": "598a91f5-e434-4b94-904a-5a4da77f2d9a", 00:11:43.963 "is_configured": true, 00:11:43.963 "data_offset": 0, 00:11:43.963 "data_size": 65536 00:11:43.963 }, 00:11:43.963 { 00:11:43.963 "name": "BaseBdev4", 00:11:43.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.963 "is_configured": false, 
00:11:43.963 "data_offset": 0, 00:11:43.963 "data_size": 0 00:11:43.963 } 00:11:43.963 ] 00:11:43.963 }' 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.963 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.222 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:44.222 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.222 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.222 [2024-11-20 11:21:27.321443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:44.222 [2024-11-20 11:21:27.321536] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:44.222 [2024-11-20 11:21:27.321548] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:44.222 [2024-11-20 11:21:27.321869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:44.222 [2024-11-20 11:21:27.322072] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:44.222 [2024-11-20 11:21:27.322097] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:44.222 [2024-11-20 11:21:27.322417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.222 BaseBdev4 00:11:44.222 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.222 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:44.222 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:44.222 11:21:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:44.222 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:44.222 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:44.222 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:44.222 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:44.222 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.222 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.481 [ 00:11:44.481 { 00:11:44.481 "name": "BaseBdev4", 00:11:44.481 "aliases": [ 00:11:44.481 "5daf66af-3a6d-4efb-8f0f-0450f5ac2d91" 00:11:44.481 ], 00:11:44.481 "product_name": "Malloc disk", 00:11:44.481 "block_size": 512, 00:11:44.481 "num_blocks": 65536, 00:11:44.481 "uuid": "5daf66af-3a6d-4efb-8f0f-0450f5ac2d91", 00:11:44.481 "assigned_rate_limits": { 00:11:44.481 "rw_ios_per_sec": 0, 00:11:44.481 "rw_mbytes_per_sec": 0, 00:11:44.481 "r_mbytes_per_sec": 0, 00:11:44.481 "w_mbytes_per_sec": 0 00:11:44.481 }, 00:11:44.481 "claimed": true, 00:11:44.481 "claim_type": "exclusive_write", 00:11:44.481 "zoned": false, 00:11:44.481 "supported_io_types": { 00:11:44.481 "read": true, 00:11:44.481 "write": true, 00:11:44.481 "unmap": true, 00:11:44.481 "flush": true, 00:11:44.481 "reset": true, 00:11:44.481 
"nvme_admin": false, 00:11:44.481 "nvme_io": false, 00:11:44.481 "nvme_io_md": false, 00:11:44.481 "write_zeroes": true, 00:11:44.481 "zcopy": true, 00:11:44.481 "get_zone_info": false, 00:11:44.481 "zone_management": false, 00:11:44.481 "zone_append": false, 00:11:44.481 "compare": false, 00:11:44.481 "compare_and_write": false, 00:11:44.481 "abort": true, 00:11:44.481 "seek_hole": false, 00:11:44.481 "seek_data": false, 00:11:44.481 "copy": true, 00:11:44.481 "nvme_iov_md": false 00:11:44.481 }, 00:11:44.481 "memory_domains": [ 00:11:44.481 { 00:11:44.481 "dma_device_id": "system", 00:11:44.481 "dma_device_type": 1 00:11:44.481 }, 00:11:44.481 { 00:11:44.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.481 "dma_device_type": 2 00:11:44.481 } 00:11:44.481 ], 00:11:44.481 "driver_specific": {} 00:11:44.481 } 00:11:44.481 ] 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.481 11:21:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.481 "name": "Existed_Raid", 00:11:44.481 "uuid": "efbaa36d-132e-4b78-b4df-e6c4a33896b0", 00:11:44.481 "strip_size_kb": 64, 00:11:44.481 "state": "online", 00:11:44.481 "raid_level": "raid0", 00:11:44.481 "superblock": false, 00:11:44.481 "num_base_bdevs": 4, 00:11:44.481 "num_base_bdevs_discovered": 4, 00:11:44.481 "num_base_bdevs_operational": 4, 00:11:44.481 "base_bdevs_list": [ 00:11:44.481 { 00:11:44.481 "name": "BaseBdev1", 00:11:44.481 "uuid": "f202695a-25c9-474e-ba38-c3746da70c90", 00:11:44.481 "is_configured": true, 00:11:44.481 "data_offset": 0, 00:11:44.481 "data_size": 65536 00:11:44.481 }, 00:11:44.481 { 00:11:44.481 "name": "BaseBdev2", 00:11:44.481 "uuid": "74e0ded1-f6c7-4e59-a05f-3ec1a8a292f0", 00:11:44.481 "is_configured": true, 00:11:44.481 "data_offset": 0, 00:11:44.481 "data_size": 65536 00:11:44.481 }, 00:11:44.481 { 00:11:44.481 "name": "BaseBdev3", 00:11:44.481 "uuid": 
"598a91f5-e434-4b94-904a-5a4da77f2d9a", 00:11:44.481 "is_configured": true, 00:11:44.481 "data_offset": 0, 00:11:44.481 "data_size": 65536 00:11:44.481 }, 00:11:44.481 { 00:11:44.481 "name": "BaseBdev4", 00:11:44.481 "uuid": "5daf66af-3a6d-4efb-8f0f-0450f5ac2d91", 00:11:44.481 "is_configured": true, 00:11:44.481 "data_offset": 0, 00:11:44.481 "data_size": 65536 00:11:44.481 } 00:11:44.481 ] 00:11:44.481 }' 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.481 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.740 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:44.740 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:44.740 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:44.740 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:44.740 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:44.740 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:44.740 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:44.740 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:44.740 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.740 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.740 [2024-11-20 11:21:27.789129] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:44.740 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.740 11:21:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:44.740 "name": "Existed_Raid", 00:11:44.740 "aliases": [ 00:11:44.740 "efbaa36d-132e-4b78-b4df-e6c4a33896b0" 00:11:44.740 ], 00:11:44.740 "product_name": "Raid Volume", 00:11:44.740 "block_size": 512, 00:11:44.740 "num_blocks": 262144, 00:11:44.740 "uuid": "efbaa36d-132e-4b78-b4df-e6c4a33896b0", 00:11:44.740 "assigned_rate_limits": { 00:11:44.740 "rw_ios_per_sec": 0, 00:11:44.740 "rw_mbytes_per_sec": 0, 00:11:44.740 "r_mbytes_per_sec": 0, 00:11:44.740 "w_mbytes_per_sec": 0 00:11:44.740 }, 00:11:44.740 "claimed": false, 00:11:44.740 "zoned": false, 00:11:44.740 "supported_io_types": { 00:11:44.740 "read": true, 00:11:44.740 "write": true, 00:11:44.740 "unmap": true, 00:11:44.740 "flush": true, 00:11:44.740 "reset": true, 00:11:44.740 "nvme_admin": false, 00:11:44.740 "nvme_io": false, 00:11:44.740 "nvme_io_md": false, 00:11:44.740 "write_zeroes": true, 00:11:44.740 "zcopy": false, 00:11:44.740 "get_zone_info": false, 00:11:44.740 "zone_management": false, 00:11:44.740 "zone_append": false, 00:11:44.740 "compare": false, 00:11:44.740 "compare_and_write": false, 00:11:44.740 "abort": false, 00:11:44.740 "seek_hole": false, 00:11:44.740 "seek_data": false, 00:11:44.740 "copy": false, 00:11:44.740 "nvme_iov_md": false 00:11:44.740 }, 00:11:44.740 "memory_domains": [ 00:11:44.740 { 00:11:44.740 "dma_device_id": "system", 00:11:44.740 "dma_device_type": 1 00:11:44.740 }, 00:11:44.740 { 00:11:44.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.740 "dma_device_type": 2 00:11:44.740 }, 00:11:44.740 { 00:11:44.740 "dma_device_id": "system", 00:11:44.740 "dma_device_type": 1 00:11:44.740 }, 00:11:44.740 { 00:11:44.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.740 "dma_device_type": 2 00:11:44.740 }, 00:11:44.740 { 00:11:44.740 "dma_device_id": "system", 00:11:44.740 "dma_device_type": 1 00:11:44.740 }, 00:11:44.740 { 00:11:44.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:44.740 "dma_device_type": 2 00:11:44.740 }, 00:11:44.740 { 00:11:44.740 "dma_device_id": "system", 00:11:44.740 "dma_device_type": 1 00:11:44.740 }, 00:11:44.740 { 00:11:44.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.740 "dma_device_type": 2 00:11:44.740 } 00:11:44.740 ], 00:11:44.740 "driver_specific": { 00:11:44.740 "raid": { 00:11:44.740 "uuid": "efbaa36d-132e-4b78-b4df-e6c4a33896b0", 00:11:44.740 "strip_size_kb": 64, 00:11:44.740 "state": "online", 00:11:44.740 "raid_level": "raid0", 00:11:44.740 "superblock": false, 00:11:44.740 "num_base_bdevs": 4, 00:11:44.740 "num_base_bdevs_discovered": 4, 00:11:44.740 "num_base_bdevs_operational": 4, 00:11:44.740 "base_bdevs_list": [ 00:11:44.740 { 00:11:44.740 "name": "BaseBdev1", 00:11:44.740 "uuid": "f202695a-25c9-474e-ba38-c3746da70c90", 00:11:44.740 "is_configured": true, 00:11:44.740 "data_offset": 0, 00:11:44.740 "data_size": 65536 00:11:44.740 }, 00:11:44.740 { 00:11:44.740 "name": "BaseBdev2", 00:11:44.740 "uuid": "74e0ded1-f6c7-4e59-a05f-3ec1a8a292f0", 00:11:44.740 "is_configured": true, 00:11:44.740 "data_offset": 0, 00:11:44.740 "data_size": 65536 00:11:44.740 }, 00:11:44.740 { 00:11:44.740 "name": "BaseBdev3", 00:11:44.740 "uuid": "598a91f5-e434-4b94-904a-5a4da77f2d9a", 00:11:44.740 "is_configured": true, 00:11:44.740 "data_offset": 0, 00:11:44.740 "data_size": 65536 00:11:44.740 }, 00:11:44.740 { 00:11:44.740 "name": "BaseBdev4", 00:11:44.740 "uuid": "5daf66af-3a6d-4efb-8f0f-0450f5ac2d91", 00:11:44.740 "is_configured": true, 00:11:44.740 "data_offset": 0, 00:11:44.740 "data_size": 65536 00:11:44.740 } 00:11:44.740 ] 00:11:44.740 } 00:11:44.740 } 00:11:44.740 }' 00:11:44.740 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:45.000 BaseBdev2 00:11:45.000 BaseBdev3 
00:11:45.000 BaseBdev4' 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.000 11:21:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.000 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.000 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.000 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.000 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.000 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.000 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:45.000 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.000 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.000 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.000 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.000 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.000 11:21:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.000 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:45.000 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.000 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.000 [2024-11-20 11:21:28.100535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:45.000 [2024-11-20 11:21:28.100573] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:45.000 [2024-11-20 11:21:28.100633] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.260 "name": "Existed_Raid", 00:11:45.260 "uuid": "efbaa36d-132e-4b78-b4df-e6c4a33896b0", 00:11:45.260 "strip_size_kb": 64, 00:11:45.260 "state": "offline", 00:11:45.260 "raid_level": "raid0", 00:11:45.260 "superblock": false, 00:11:45.260 "num_base_bdevs": 4, 00:11:45.260 "num_base_bdevs_discovered": 3, 00:11:45.260 "num_base_bdevs_operational": 3, 00:11:45.260 "base_bdevs_list": [ 00:11:45.260 { 00:11:45.260 "name": null, 00:11:45.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.260 "is_configured": false, 00:11:45.260 "data_offset": 0, 00:11:45.260 "data_size": 65536 00:11:45.260 }, 00:11:45.260 { 00:11:45.260 "name": "BaseBdev2", 00:11:45.260 "uuid": "74e0ded1-f6c7-4e59-a05f-3ec1a8a292f0", 00:11:45.260 "is_configured": 
true, 00:11:45.260 "data_offset": 0, 00:11:45.260 "data_size": 65536 00:11:45.260 }, 00:11:45.260 { 00:11:45.260 "name": "BaseBdev3", 00:11:45.260 "uuid": "598a91f5-e434-4b94-904a-5a4da77f2d9a", 00:11:45.260 "is_configured": true, 00:11:45.260 "data_offset": 0, 00:11:45.260 "data_size": 65536 00:11:45.260 }, 00:11:45.260 { 00:11:45.260 "name": "BaseBdev4", 00:11:45.260 "uuid": "5daf66af-3a6d-4efb-8f0f-0450f5ac2d91", 00:11:45.260 "is_configured": true, 00:11:45.260 "data_offset": 0, 00:11:45.260 "data_size": 65536 00:11:45.260 } 00:11:45.260 ] 00:11:45.260 }' 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.260 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.828 [2024-11-20 11:21:28.740057] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.828 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.828 [2024-11-20 11:21:28.911386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:46.086 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.086 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:46.086 11:21:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:46.086 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:46.086 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.086 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.086 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.086 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.086 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:46.087 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:46.087 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:46.087 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.087 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.087 [2024-11-20 11:21:29.078712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:46.087 [2024-11-20 11:21:29.078779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:46.087 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.087 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:46.087 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:46.087 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.087 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:46.087 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.346 BaseBdev2 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.346 [ 00:11:46.346 { 00:11:46.346 "name": "BaseBdev2", 00:11:46.346 "aliases": [ 00:11:46.346 "ed71dd79-cdb6-48a8-8e34-b1b82f50b927" 00:11:46.346 ], 00:11:46.346 "product_name": "Malloc disk", 00:11:46.346 "block_size": 512, 00:11:46.346 "num_blocks": 65536, 00:11:46.346 "uuid": "ed71dd79-cdb6-48a8-8e34-b1b82f50b927", 00:11:46.346 "assigned_rate_limits": { 00:11:46.346 "rw_ios_per_sec": 0, 00:11:46.346 "rw_mbytes_per_sec": 0, 00:11:46.346 "r_mbytes_per_sec": 0, 00:11:46.346 "w_mbytes_per_sec": 0 00:11:46.346 }, 00:11:46.346 "claimed": false, 00:11:46.346 "zoned": false, 00:11:46.346 "supported_io_types": { 00:11:46.346 "read": true, 00:11:46.346 "write": true, 00:11:46.346 "unmap": true, 00:11:46.346 "flush": true, 00:11:46.346 "reset": true, 00:11:46.346 "nvme_admin": false, 00:11:46.346 "nvme_io": false, 00:11:46.346 "nvme_io_md": false, 00:11:46.346 "write_zeroes": true, 00:11:46.346 "zcopy": true, 00:11:46.346 "get_zone_info": false, 00:11:46.346 "zone_management": false, 00:11:46.346 "zone_append": false, 00:11:46.346 "compare": false, 00:11:46.346 "compare_and_write": false, 00:11:46.346 "abort": true, 00:11:46.346 "seek_hole": false, 00:11:46.346 
"seek_data": false, 00:11:46.346 "copy": true, 00:11:46.346 "nvme_iov_md": false 00:11:46.346 }, 00:11:46.346 "memory_domains": [ 00:11:46.346 { 00:11:46.346 "dma_device_id": "system", 00:11:46.346 "dma_device_type": 1 00:11:46.346 }, 00:11:46.346 { 00:11:46.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.346 "dma_device_type": 2 00:11:46.346 } 00:11:46.346 ], 00:11:46.346 "driver_specific": {} 00:11:46.346 } 00:11:46.346 ] 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.346 BaseBdev3 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.346 [ 00:11:46.346 { 00:11:46.346 "name": "BaseBdev3", 00:11:46.346 "aliases": [ 00:11:46.346 "19af71c3-e61d-44e8-b1a1-89a0dd0b1b70" 00:11:46.346 ], 00:11:46.346 "product_name": "Malloc disk", 00:11:46.346 "block_size": 512, 00:11:46.346 "num_blocks": 65536, 00:11:46.346 "uuid": "19af71c3-e61d-44e8-b1a1-89a0dd0b1b70", 00:11:46.346 "assigned_rate_limits": { 00:11:46.346 "rw_ios_per_sec": 0, 00:11:46.346 "rw_mbytes_per_sec": 0, 00:11:46.346 "r_mbytes_per_sec": 0, 00:11:46.346 "w_mbytes_per_sec": 0 00:11:46.346 }, 00:11:46.346 "claimed": false, 00:11:46.346 "zoned": false, 00:11:46.346 "supported_io_types": { 00:11:46.346 "read": true, 00:11:46.346 "write": true, 00:11:46.346 "unmap": true, 00:11:46.346 "flush": true, 00:11:46.346 "reset": true, 00:11:46.346 "nvme_admin": false, 00:11:46.346 "nvme_io": false, 00:11:46.346 "nvme_io_md": false, 00:11:46.346 "write_zeroes": true, 00:11:46.346 "zcopy": true, 00:11:46.346 "get_zone_info": false, 00:11:46.346 "zone_management": false, 00:11:46.346 "zone_append": false, 00:11:46.346 "compare": false, 00:11:46.346 "compare_and_write": false, 00:11:46.346 "abort": true, 00:11:46.346 "seek_hole": false, 00:11:46.346 "seek_data": false, 
00:11:46.346 "copy": true, 00:11:46.346 "nvme_iov_md": false 00:11:46.346 }, 00:11:46.346 "memory_domains": [ 00:11:46.346 { 00:11:46.346 "dma_device_id": "system", 00:11:46.346 "dma_device_type": 1 00:11:46.346 }, 00:11:46.346 { 00:11:46.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.346 "dma_device_type": 2 00:11:46.346 } 00:11:46.346 ], 00:11:46.346 "driver_specific": {} 00:11:46.346 } 00:11:46.346 ] 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.346 BaseBdev4 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:46.346 
11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.346 [ 00:11:46.346 { 00:11:46.346 "name": "BaseBdev4", 00:11:46.346 "aliases": [ 00:11:46.346 "cbe3354c-bf3a-4057-8d7c-2eb8815a5892" 00:11:46.346 ], 00:11:46.346 "product_name": "Malloc disk", 00:11:46.346 "block_size": 512, 00:11:46.346 "num_blocks": 65536, 00:11:46.346 "uuid": "cbe3354c-bf3a-4057-8d7c-2eb8815a5892", 00:11:46.346 "assigned_rate_limits": { 00:11:46.346 "rw_ios_per_sec": 0, 00:11:46.346 "rw_mbytes_per_sec": 0, 00:11:46.346 "r_mbytes_per_sec": 0, 00:11:46.346 "w_mbytes_per_sec": 0 00:11:46.346 }, 00:11:46.346 "claimed": false, 00:11:46.346 "zoned": false, 00:11:46.346 "supported_io_types": { 00:11:46.346 "read": true, 00:11:46.346 "write": true, 00:11:46.346 "unmap": true, 00:11:46.346 "flush": true, 00:11:46.346 "reset": true, 00:11:46.346 "nvme_admin": false, 00:11:46.346 "nvme_io": false, 00:11:46.346 "nvme_io_md": false, 00:11:46.346 "write_zeroes": true, 00:11:46.346 "zcopy": true, 00:11:46.346 "get_zone_info": false, 00:11:46.346 "zone_management": false, 00:11:46.346 "zone_append": false, 00:11:46.346 "compare": false, 00:11:46.346 "compare_and_write": false, 00:11:46.346 "abort": true, 00:11:46.346 "seek_hole": false, 00:11:46.346 "seek_data": false, 00:11:46.346 
"copy": true, 00:11:46.346 "nvme_iov_md": false 00:11:46.346 }, 00:11:46.346 "memory_domains": [ 00:11:46.346 { 00:11:46.346 "dma_device_id": "system", 00:11:46.346 "dma_device_type": 1 00:11:46.346 }, 00:11:46.346 { 00:11:46.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.346 "dma_device_type": 2 00:11:46.346 } 00:11:46.346 ], 00:11:46.346 "driver_specific": {} 00:11:46.346 } 00:11:46.346 ] 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.346 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.606 [2024-11-20 11:21:29.460834] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:46.606 [2024-11-20 11:21:29.460885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:46.606 [2024-11-20 11:21:29.460915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.606 [2024-11-20 11:21:29.463061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:46.606 [2024-11-20 11:21:29.463127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:46.606 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.606 11:21:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:46.606 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.606 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.606 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:46.606 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.606 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.606 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.606 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.606 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.606 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.606 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.606 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.606 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.606 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.606 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.606 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.606 "name": "Existed_Raid", 00:11:46.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.606 "strip_size_kb": 64, 00:11:46.606 "state": "configuring", 00:11:46.606 
"raid_level": "raid0", 00:11:46.606 "superblock": false, 00:11:46.606 "num_base_bdevs": 4, 00:11:46.606 "num_base_bdevs_discovered": 3, 00:11:46.606 "num_base_bdevs_operational": 4, 00:11:46.606 "base_bdevs_list": [ 00:11:46.606 { 00:11:46.606 "name": "BaseBdev1", 00:11:46.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.606 "is_configured": false, 00:11:46.606 "data_offset": 0, 00:11:46.606 "data_size": 0 00:11:46.606 }, 00:11:46.606 { 00:11:46.606 "name": "BaseBdev2", 00:11:46.606 "uuid": "ed71dd79-cdb6-48a8-8e34-b1b82f50b927", 00:11:46.606 "is_configured": true, 00:11:46.606 "data_offset": 0, 00:11:46.606 "data_size": 65536 00:11:46.606 }, 00:11:46.606 { 00:11:46.606 "name": "BaseBdev3", 00:11:46.606 "uuid": "19af71c3-e61d-44e8-b1a1-89a0dd0b1b70", 00:11:46.606 "is_configured": true, 00:11:46.606 "data_offset": 0, 00:11:46.606 "data_size": 65536 00:11:46.606 }, 00:11:46.606 { 00:11:46.606 "name": "BaseBdev4", 00:11:46.606 "uuid": "cbe3354c-bf3a-4057-8d7c-2eb8815a5892", 00:11:46.606 "is_configured": true, 00:11:46.606 "data_offset": 0, 00:11:46.606 "data_size": 65536 00:11:46.606 } 00:11:46.606 ] 00:11:46.606 }' 00:11:46.606 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.606 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.865 [2024-11-20 11:21:29.884244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.865 "name": "Existed_Raid", 00:11:46.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.865 "strip_size_kb": 64, 00:11:46.865 "state": "configuring", 00:11:46.865 "raid_level": "raid0", 00:11:46.865 "superblock": false, 00:11:46.865 
"num_base_bdevs": 4, 00:11:46.865 "num_base_bdevs_discovered": 2, 00:11:46.865 "num_base_bdevs_operational": 4, 00:11:46.865 "base_bdevs_list": [ 00:11:46.865 { 00:11:46.865 "name": "BaseBdev1", 00:11:46.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.865 "is_configured": false, 00:11:46.865 "data_offset": 0, 00:11:46.865 "data_size": 0 00:11:46.865 }, 00:11:46.865 { 00:11:46.865 "name": null, 00:11:46.865 "uuid": "ed71dd79-cdb6-48a8-8e34-b1b82f50b927", 00:11:46.865 "is_configured": false, 00:11:46.865 "data_offset": 0, 00:11:46.865 "data_size": 65536 00:11:46.865 }, 00:11:46.865 { 00:11:46.865 "name": "BaseBdev3", 00:11:46.865 "uuid": "19af71c3-e61d-44e8-b1a1-89a0dd0b1b70", 00:11:46.865 "is_configured": true, 00:11:46.865 "data_offset": 0, 00:11:46.865 "data_size": 65536 00:11:46.865 }, 00:11:46.865 { 00:11:46.865 "name": "BaseBdev4", 00:11:46.865 "uuid": "cbe3354c-bf3a-4057-8d7c-2eb8815a5892", 00:11:46.865 "is_configured": true, 00:11:46.865 "data_offset": 0, 00:11:46.865 "data_size": 65536 00:11:46.865 } 00:11:46.865 ] 00:11:46.865 }' 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.865 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.433 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.433 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:47.433 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.433 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.433 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.433 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:47.434 11:21:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 [2024-11-20 11:21:30.435936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:47.434 BaseBdev1 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:47.434 [ 00:11:47.434 { 00:11:47.434 "name": "BaseBdev1", 00:11:47.434 "aliases": [ 00:11:47.434 "9da78495-d042-476f-9311-421e400f93e7" 00:11:47.434 ], 00:11:47.434 "product_name": "Malloc disk", 00:11:47.434 "block_size": 512, 00:11:47.434 "num_blocks": 65536, 00:11:47.434 "uuid": "9da78495-d042-476f-9311-421e400f93e7", 00:11:47.434 "assigned_rate_limits": { 00:11:47.434 "rw_ios_per_sec": 0, 00:11:47.434 "rw_mbytes_per_sec": 0, 00:11:47.434 "r_mbytes_per_sec": 0, 00:11:47.434 "w_mbytes_per_sec": 0 00:11:47.434 }, 00:11:47.434 "claimed": true, 00:11:47.434 "claim_type": "exclusive_write", 00:11:47.434 "zoned": false, 00:11:47.434 "supported_io_types": { 00:11:47.434 "read": true, 00:11:47.434 "write": true, 00:11:47.434 "unmap": true, 00:11:47.434 "flush": true, 00:11:47.434 "reset": true, 00:11:47.434 "nvme_admin": false, 00:11:47.434 "nvme_io": false, 00:11:47.434 "nvme_io_md": false, 00:11:47.434 "write_zeroes": true, 00:11:47.434 "zcopy": true, 00:11:47.434 "get_zone_info": false, 00:11:47.434 "zone_management": false, 00:11:47.434 "zone_append": false, 00:11:47.434 "compare": false, 00:11:47.434 "compare_and_write": false, 00:11:47.434 "abort": true, 00:11:47.434 "seek_hole": false, 00:11:47.434 "seek_data": false, 00:11:47.434 "copy": true, 00:11:47.434 "nvme_iov_md": false 00:11:47.434 }, 00:11:47.434 "memory_domains": [ 00:11:47.434 { 00:11:47.434 "dma_device_id": "system", 00:11:47.434 "dma_device_type": 1 00:11:47.434 }, 00:11:47.434 { 00:11:47.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.434 "dma_device_type": 2 00:11:47.434 } 00:11:47.434 ], 00:11:47.434 "driver_specific": {} 00:11:47.434 } 00:11:47.434 ] 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.434 "name": "Existed_Raid", 00:11:47.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.434 "strip_size_kb": 64, 00:11:47.434 "state": "configuring", 00:11:47.434 "raid_level": "raid0", 00:11:47.434 "superblock": false, 
00:11:47.434 "num_base_bdevs": 4, 00:11:47.434 "num_base_bdevs_discovered": 3, 00:11:47.434 "num_base_bdevs_operational": 4, 00:11:47.434 "base_bdevs_list": [ 00:11:47.434 { 00:11:47.434 "name": "BaseBdev1", 00:11:47.434 "uuid": "9da78495-d042-476f-9311-421e400f93e7", 00:11:47.434 "is_configured": true, 00:11:47.434 "data_offset": 0, 00:11:47.434 "data_size": 65536 00:11:47.434 }, 00:11:47.434 { 00:11:47.434 "name": null, 00:11:47.434 "uuid": "ed71dd79-cdb6-48a8-8e34-b1b82f50b927", 00:11:47.434 "is_configured": false, 00:11:47.434 "data_offset": 0, 00:11:47.434 "data_size": 65536 00:11:47.434 }, 00:11:47.434 { 00:11:47.434 "name": "BaseBdev3", 00:11:47.434 "uuid": "19af71c3-e61d-44e8-b1a1-89a0dd0b1b70", 00:11:47.434 "is_configured": true, 00:11:47.434 "data_offset": 0, 00:11:47.434 "data_size": 65536 00:11:47.434 }, 00:11:47.434 { 00:11:47.434 "name": "BaseBdev4", 00:11:47.434 "uuid": "cbe3354c-bf3a-4057-8d7c-2eb8815a5892", 00:11:47.434 "is_configured": true, 00:11:47.434 "data_offset": 0, 00:11:47.434 "data_size": 65536 00:11:47.434 } 00:11:47.434 ] 00:11:47.434 }' 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.434 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:48.001 11:21:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.001 [2024-11-20 11:21:30.967556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.001 11:21:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.001 11:21:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.001 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.001 "name": "Existed_Raid", 00:11:48.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.001 "strip_size_kb": 64, 00:11:48.001 "state": "configuring", 00:11:48.001 "raid_level": "raid0", 00:11:48.001 "superblock": false, 00:11:48.001 "num_base_bdevs": 4, 00:11:48.001 "num_base_bdevs_discovered": 2, 00:11:48.001 "num_base_bdevs_operational": 4, 00:11:48.001 "base_bdevs_list": [ 00:11:48.001 { 00:11:48.001 "name": "BaseBdev1", 00:11:48.001 "uuid": "9da78495-d042-476f-9311-421e400f93e7", 00:11:48.001 "is_configured": true, 00:11:48.001 "data_offset": 0, 00:11:48.001 "data_size": 65536 00:11:48.001 }, 00:11:48.001 { 00:11:48.001 "name": null, 00:11:48.001 "uuid": "ed71dd79-cdb6-48a8-8e34-b1b82f50b927", 00:11:48.001 "is_configured": false, 00:11:48.001 "data_offset": 0, 00:11:48.001 "data_size": 65536 00:11:48.001 }, 00:11:48.001 { 00:11:48.001 "name": null, 00:11:48.001 "uuid": "19af71c3-e61d-44e8-b1a1-89a0dd0b1b70", 00:11:48.001 "is_configured": false, 00:11:48.001 "data_offset": 0, 00:11:48.001 "data_size": 65536 00:11:48.001 }, 00:11:48.001 { 00:11:48.001 "name": "BaseBdev4", 00:11:48.001 "uuid": "cbe3354c-bf3a-4057-8d7c-2eb8815a5892", 00:11:48.001 "is_configured": true, 00:11:48.001 "data_offset": 0, 00:11:48.001 "data_size": 65536 00:11:48.001 } 00:11:48.001 ] 00:11:48.001 }' 00:11:48.001 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.001 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.259 11:21:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.259 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.259 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.259 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:48.259 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.517 [2024-11-20 11:21:31.406807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.517 "name": "Existed_Raid", 00:11:48.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.517 "strip_size_kb": 64, 00:11:48.517 "state": "configuring", 00:11:48.517 "raid_level": "raid0", 00:11:48.517 "superblock": false, 00:11:48.517 "num_base_bdevs": 4, 00:11:48.517 "num_base_bdevs_discovered": 3, 00:11:48.517 "num_base_bdevs_operational": 4, 00:11:48.517 "base_bdevs_list": [ 00:11:48.517 { 00:11:48.517 "name": "BaseBdev1", 00:11:48.517 "uuid": "9da78495-d042-476f-9311-421e400f93e7", 00:11:48.517 "is_configured": true, 00:11:48.517 "data_offset": 0, 00:11:48.517 "data_size": 65536 00:11:48.517 }, 00:11:48.517 { 00:11:48.517 "name": null, 00:11:48.517 "uuid": "ed71dd79-cdb6-48a8-8e34-b1b82f50b927", 00:11:48.517 "is_configured": false, 00:11:48.517 "data_offset": 0, 00:11:48.517 "data_size": 65536 00:11:48.517 }, 00:11:48.517 { 00:11:48.517 "name": "BaseBdev3", 00:11:48.517 "uuid": "19af71c3-e61d-44e8-b1a1-89a0dd0b1b70", 
00:11:48.517 "is_configured": true, 00:11:48.517 "data_offset": 0, 00:11:48.517 "data_size": 65536 00:11:48.517 }, 00:11:48.517 { 00:11:48.517 "name": "BaseBdev4", 00:11:48.517 "uuid": "cbe3354c-bf3a-4057-8d7c-2eb8815a5892", 00:11:48.517 "is_configured": true, 00:11:48.517 "data_offset": 0, 00:11:48.517 "data_size": 65536 00:11:48.517 } 00:11:48.517 ] 00:11:48.517 }' 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.517 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.776 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:48.776 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.776 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.776 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.776 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.776 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:48.776 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:48.776 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.776 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.776 [2024-11-20 11:21:31.822152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:49.034 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.034 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:49.034 11:21:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.034 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.034 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:49.034 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.034 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.034 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.034 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.034 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.034 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.034 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.034 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.034 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.034 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.034 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.034 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.034 "name": "Existed_Raid", 00:11:49.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.034 "strip_size_kb": 64, 00:11:49.034 "state": "configuring", 00:11:49.034 "raid_level": "raid0", 00:11:49.034 "superblock": false, 00:11:49.034 "num_base_bdevs": 4, 00:11:49.034 "num_base_bdevs_discovered": 2, 00:11:49.035 
"num_base_bdevs_operational": 4, 00:11:49.035 "base_bdevs_list": [ 00:11:49.035 { 00:11:49.035 "name": null, 00:11:49.035 "uuid": "9da78495-d042-476f-9311-421e400f93e7", 00:11:49.035 "is_configured": false, 00:11:49.035 "data_offset": 0, 00:11:49.035 "data_size": 65536 00:11:49.035 }, 00:11:49.035 { 00:11:49.035 "name": null, 00:11:49.035 "uuid": "ed71dd79-cdb6-48a8-8e34-b1b82f50b927", 00:11:49.035 "is_configured": false, 00:11:49.035 "data_offset": 0, 00:11:49.035 "data_size": 65536 00:11:49.035 }, 00:11:49.035 { 00:11:49.035 "name": "BaseBdev3", 00:11:49.035 "uuid": "19af71c3-e61d-44e8-b1a1-89a0dd0b1b70", 00:11:49.035 "is_configured": true, 00:11:49.035 "data_offset": 0, 00:11:49.035 "data_size": 65536 00:11:49.035 }, 00:11:49.035 { 00:11:49.035 "name": "BaseBdev4", 00:11:49.035 "uuid": "cbe3354c-bf3a-4057-8d7c-2eb8815a5892", 00:11:49.035 "is_configured": true, 00:11:49.035 "data_offset": 0, 00:11:49.035 "data_size": 65536 00:11:49.035 } 00:11:49.035 ] 00:11:49.035 }' 00:11:49.035 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.035 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.294 [2024-11-20 11:21:32.392037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.294 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.294 
11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.553 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.553 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.553 "name": "Existed_Raid", 00:11:49.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.553 "strip_size_kb": 64, 00:11:49.553 "state": "configuring", 00:11:49.553 "raid_level": "raid0", 00:11:49.553 "superblock": false, 00:11:49.553 "num_base_bdevs": 4, 00:11:49.553 "num_base_bdevs_discovered": 3, 00:11:49.553 "num_base_bdevs_operational": 4, 00:11:49.553 "base_bdevs_list": [ 00:11:49.553 { 00:11:49.553 "name": null, 00:11:49.553 "uuid": "9da78495-d042-476f-9311-421e400f93e7", 00:11:49.553 "is_configured": false, 00:11:49.553 "data_offset": 0, 00:11:49.553 "data_size": 65536 00:11:49.553 }, 00:11:49.553 { 00:11:49.553 "name": "BaseBdev2", 00:11:49.553 "uuid": "ed71dd79-cdb6-48a8-8e34-b1b82f50b927", 00:11:49.553 "is_configured": true, 00:11:49.553 "data_offset": 0, 00:11:49.553 "data_size": 65536 00:11:49.553 }, 00:11:49.553 { 00:11:49.553 "name": "BaseBdev3", 00:11:49.553 "uuid": "19af71c3-e61d-44e8-b1a1-89a0dd0b1b70", 00:11:49.553 "is_configured": true, 00:11:49.553 "data_offset": 0, 00:11:49.553 "data_size": 65536 00:11:49.553 }, 00:11:49.553 { 00:11:49.553 "name": "BaseBdev4", 00:11:49.553 "uuid": "cbe3354c-bf3a-4057-8d7c-2eb8815a5892", 00:11:49.553 "is_configured": true, 00:11:49.553 "data_offset": 0, 00:11:49.553 "data_size": 65536 00:11:49.553 } 00:11:49.553 ] 00:11:49.553 }' 00:11:49.553 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.553 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.829 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.829 11:21:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.829 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.829 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:49.829 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.829 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:49.830 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.830 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:49.830 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.830 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.830 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.203 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9da78495-d042-476f-9311-421e400f93e7 00:11:50.203 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.203 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.203 [2024-11-20 11:21:32.995785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:50.203 [2024-11-20 11:21:32.995877] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:50.203 [2024-11-20 11:21:32.995897] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:50.203 [2024-11-20 11:21:32.996314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:11:50.203 [2024-11-20 11:21:32.996572] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:50.203 [2024-11-20 11:21:32.996607] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:50.203 [2024-11-20 11:21:32.996962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.203 NewBaseBdev 00:11:50.203 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.203 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:50.203 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:50.203 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:50.203 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:50.203 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:50.203 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:50.203 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:50.203 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.203 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:50.203 [ 00:11:50.203 { 00:11:50.203 "name": "NewBaseBdev", 00:11:50.203 "aliases": [ 00:11:50.203 "9da78495-d042-476f-9311-421e400f93e7" 00:11:50.203 ], 00:11:50.203 "product_name": "Malloc disk", 00:11:50.203 "block_size": 512, 00:11:50.203 "num_blocks": 65536, 00:11:50.203 "uuid": "9da78495-d042-476f-9311-421e400f93e7", 00:11:50.203 "assigned_rate_limits": { 00:11:50.203 "rw_ios_per_sec": 0, 00:11:50.203 "rw_mbytes_per_sec": 0, 00:11:50.203 "r_mbytes_per_sec": 0, 00:11:50.203 "w_mbytes_per_sec": 0 00:11:50.203 }, 00:11:50.203 "claimed": true, 00:11:50.203 "claim_type": "exclusive_write", 00:11:50.203 "zoned": false, 00:11:50.203 "supported_io_types": { 00:11:50.203 "read": true, 00:11:50.203 "write": true, 00:11:50.203 "unmap": true, 00:11:50.203 "flush": true, 00:11:50.203 "reset": true, 00:11:50.203 "nvme_admin": false, 00:11:50.203 "nvme_io": false, 00:11:50.203 "nvme_io_md": false, 00:11:50.203 "write_zeroes": true, 00:11:50.203 "zcopy": true, 00:11:50.203 "get_zone_info": false, 00:11:50.203 "zone_management": false, 00:11:50.203 "zone_append": false, 00:11:50.203 "compare": false, 00:11:50.203 "compare_and_write": false, 00:11:50.203 "abort": true, 00:11:50.203 "seek_hole": false, 00:11:50.203 "seek_data": false, 00:11:50.203 "copy": true, 00:11:50.203 "nvme_iov_md": false 00:11:50.203 }, 00:11:50.203 "memory_domains": [ 00:11:50.203 { 00:11:50.203 "dma_device_id": "system", 00:11:50.203 "dma_device_type": 1 00:11:50.203 }, 00:11:50.203 { 00:11:50.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.203 "dma_device_type": 2 00:11:50.203 } 00:11:50.203 ], 00:11:50.203 "driver_specific": {} 00:11:50.203 } 00:11:50.203 ] 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.203 "name": "Existed_Raid", 00:11:50.203 "uuid": "7fce79ae-e067-413b-8720-037309c8b70c", 00:11:50.203 "strip_size_kb": 64, 00:11:50.203 "state": "online", 00:11:50.203 "raid_level": "raid0", 00:11:50.203 "superblock": false, 00:11:50.203 "num_base_bdevs": 4, 00:11:50.203 
"num_base_bdevs_discovered": 4, 00:11:50.203 "num_base_bdevs_operational": 4, 00:11:50.203 "base_bdevs_list": [ 00:11:50.203 { 00:11:50.203 "name": "NewBaseBdev", 00:11:50.203 "uuid": "9da78495-d042-476f-9311-421e400f93e7", 00:11:50.203 "is_configured": true, 00:11:50.203 "data_offset": 0, 00:11:50.203 "data_size": 65536 00:11:50.203 }, 00:11:50.203 { 00:11:50.203 "name": "BaseBdev2", 00:11:50.203 "uuid": "ed71dd79-cdb6-48a8-8e34-b1b82f50b927", 00:11:50.203 "is_configured": true, 00:11:50.203 "data_offset": 0, 00:11:50.203 "data_size": 65536 00:11:50.203 }, 00:11:50.203 { 00:11:50.203 "name": "BaseBdev3", 00:11:50.203 "uuid": "19af71c3-e61d-44e8-b1a1-89a0dd0b1b70", 00:11:50.203 "is_configured": true, 00:11:50.203 "data_offset": 0, 00:11:50.203 "data_size": 65536 00:11:50.203 }, 00:11:50.203 { 00:11:50.203 "name": "BaseBdev4", 00:11:50.203 "uuid": "cbe3354c-bf3a-4057-8d7c-2eb8815a5892", 00:11:50.203 "is_configured": true, 00:11:50.203 "data_offset": 0, 00:11:50.203 "data_size": 65536 00:11:50.203 } 00:11:50.203 ] 00:11:50.203 }' 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.203 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.483 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:50.483 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:50.483 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:50.483 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:50.483 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:50.483 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:50.483 11:21:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:50.483 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:50.483 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.483 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.483 [2024-11-20 11:21:33.479593] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.483 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.483 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:50.483 "name": "Existed_Raid", 00:11:50.483 "aliases": [ 00:11:50.483 "7fce79ae-e067-413b-8720-037309c8b70c" 00:11:50.483 ], 00:11:50.483 "product_name": "Raid Volume", 00:11:50.483 "block_size": 512, 00:11:50.483 "num_blocks": 262144, 00:11:50.483 "uuid": "7fce79ae-e067-413b-8720-037309c8b70c", 00:11:50.483 "assigned_rate_limits": { 00:11:50.483 "rw_ios_per_sec": 0, 00:11:50.483 "rw_mbytes_per_sec": 0, 00:11:50.483 "r_mbytes_per_sec": 0, 00:11:50.483 "w_mbytes_per_sec": 0 00:11:50.483 }, 00:11:50.483 "claimed": false, 00:11:50.483 "zoned": false, 00:11:50.483 "supported_io_types": { 00:11:50.483 "read": true, 00:11:50.483 "write": true, 00:11:50.483 "unmap": true, 00:11:50.483 "flush": true, 00:11:50.483 "reset": true, 00:11:50.483 "nvme_admin": false, 00:11:50.483 "nvme_io": false, 00:11:50.483 "nvme_io_md": false, 00:11:50.483 "write_zeroes": true, 00:11:50.483 "zcopy": false, 00:11:50.483 "get_zone_info": false, 00:11:50.483 "zone_management": false, 00:11:50.483 "zone_append": false, 00:11:50.483 "compare": false, 00:11:50.483 "compare_and_write": false, 00:11:50.483 "abort": false, 00:11:50.483 "seek_hole": false, 00:11:50.483 "seek_data": false, 00:11:50.483 "copy": false, 00:11:50.483 "nvme_iov_md": false 00:11:50.483 }, 00:11:50.483 "memory_domains": [ 
00:11:50.483 { 00:11:50.483 "dma_device_id": "system", 00:11:50.483 "dma_device_type": 1 00:11:50.483 }, 00:11:50.483 { 00:11:50.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.483 "dma_device_type": 2 00:11:50.483 }, 00:11:50.483 { 00:11:50.483 "dma_device_id": "system", 00:11:50.483 "dma_device_type": 1 00:11:50.483 }, 00:11:50.483 { 00:11:50.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.483 "dma_device_type": 2 00:11:50.483 }, 00:11:50.483 { 00:11:50.483 "dma_device_id": "system", 00:11:50.483 "dma_device_type": 1 00:11:50.483 }, 00:11:50.483 { 00:11:50.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.483 "dma_device_type": 2 00:11:50.483 }, 00:11:50.483 { 00:11:50.483 "dma_device_id": "system", 00:11:50.483 "dma_device_type": 1 00:11:50.483 }, 00:11:50.483 { 00:11:50.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.483 "dma_device_type": 2 00:11:50.483 } 00:11:50.483 ], 00:11:50.483 "driver_specific": { 00:11:50.483 "raid": { 00:11:50.483 "uuid": "7fce79ae-e067-413b-8720-037309c8b70c", 00:11:50.483 "strip_size_kb": 64, 00:11:50.483 "state": "online", 00:11:50.483 "raid_level": "raid0", 00:11:50.483 "superblock": false, 00:11:50.483 "num_base_bdevs": 4, 00:11:50.483 "num_base_bdevs_discovered": 4, 00:11:50.483 "num_base_bdevs_operational": 4, 00:11:50.483 "base_bdevs_list": [ 00:11:50.483 { 00:11:50.483 "name": "NewBaseBdev", 00:11:50.483 "uuid": "9da78495-d042-476f-9311-421e400f93e7", 00:11:50.483 "is_configured": true, 00:11:50.483 "data_offset": 0, 00:11:50.483 "data_size": 65536 00:11:50.483 }, 00:11:50.483 { 00:11:50.483 "name": "BaseBdev2", 00:11:50.483 "uuid": "ed71dd79-cdb6-48a8-8e34-b1b82f50b927", 00:11:50.483 "is_configured": true, 00:11:50.483 "data_offset": 0, 00:11:50.483 "data_size": 65536 00:11:50.483 }, 00:11:50.483 { 00:11:50.483 "name": "BaseBdev3", 00:11:50.483 "uuid": "19af71c3-e61d-44e8-b1a1-89a0dd0b1b70", 00:11:50.483 "is_configured": true, 00:11:50.483 "data_offset": 0, 00:11:50.483 "data_size": 65536 
00:11:50.483 }, 00:11:50.483 { 00:11:50.483 "name": "BaseBdev4", 00:11:50.483 "uuid": "cbe3354c-bf3a-4057-8d7c-2eb8815a5892", 00:11:50.483 "is_configured": true, 00:11:50.483 "data_offset": 0, 00:11:50.483 "data_size": 65536 00:11:50.483 } 00:11:50.483 ] 00:11:50.483 } 00:11:50.483 } 00:11:50.483 }' 00:11:50.484 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:50.484 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:50.484 BaseBdev2 00:11:50.484 BaseBdev3 00:11:50.484 BaseBdev4' 00:11:50.484 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.743 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:50.743 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.743 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.743 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:50.743 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.743 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.743 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.743 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.743 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.743 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.743 
11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:50.743 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.743 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.743 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.743 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.744 [2024-11-20 11:21:33.774662] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:50.744 [2024-11-20 11:21:33.774711] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:50.744 [2024-11-20 11:21:33.774830] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:50.744 [2024-11-20 11:21:33.774924] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:50.744 [2024-11-20 11:21:33.774939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69513 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69513 ']' 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69513 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69513 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.744 killing process with pid 69513 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69513' 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69513 00:11:50.744 [2024-11-20 11:21:33.815978] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:50.744 11:21:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69513 00:11:51.311 [2024-11-20 11:21:34.233211] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:52.690 11:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:52.690 00:11:52.690 real 0m11.836s 00:11:52.690 user 0m18.732s 00:11:52.690 sys 0m1.881s 00:11:52.690 11:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.690 11:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.690 ************************************ 00:11:52.690 END TEST raid_state_function_test 00:11:52.690 ************************************ 00:11:52.690 11:21:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:11:52.690 11:21:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:52.690 11:21:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.690 11:21:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:52.690 ************************************ 00:11:52.690 START TEST raid_state_function_test_sb 00:11:52.690 ************************************ 00:11:52.690 11:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:11:52.690 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:52.690 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:52.690 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:52.690 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:52.690 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:52.691 
11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70190 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70190' 00:11:52.691 Process raid pid: 70190 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70190 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70190 ']' 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.691 11:21:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.691 [2024-11-20 11:21:35.651164] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:11:52.691 [2024-11-20 11:21:35.651347] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.950 [2024-11-20 11:21:35.816839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.950 [2024-11-20 11:21:35.947704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.211 [2024-11-20 11:21:36.184054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.211 [2024-11-20 11:21:36.184112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.471 11:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.471 11:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:53.471 11:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:53.471 11:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.471 11:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.471 [2024-11-20 11:21:36.559173] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:53.471 [2024-11-20 11:21:36.559228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:53.471 [2024-11-20 11:21:36.559241] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:53.471 [2024-11-20 11:21:36.559253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:53.471 [2024-11-20 11:21:36.559261] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:11:53.471 [2024-11-20 11:21:36.559271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:53.471 [2024-11-20 11:21:36.559278] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:53.471 [2024-11-20 11:21:36.559300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:53.471 11:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.471 11:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:53.471 11:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.471 11:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.471 11:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:53.471 11:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.471 11:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.471 11:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.471 11:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.471 11:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.471 11:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.471 11:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.471 11:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.471 11:21:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.471 11:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.731 11:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.731 11:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.731 "name": "Existed_Raid", 00:11:53.731 "uuid": "a74864e3-da36-435f-ab60-7700ba9c8380", 00:11:53.731 "strip_size_kb": 64, 00:11:53.731 "state": "configuring", 00:11:53.731 "raid_level": "raid0", 00:11:53.731 "superblock": true, 00:11:53.731 "num_base_bdevs": 4, 00:11:53.731 "num_base_bdevs_discovered": 0, 00:11:53.731 "num_base_bdevs_operational": 4, 00:11:53.731 "base_bdevs_list": [ 00:11:53.731 { 00:11:53.731 "name": "BaseBdev1", 00:11:53.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.731 "is_configured": false, 00:11:53.731 "data_offset": 0, 00:11:53.731 "data_size": 0 00:11:53.731 }, 00:11:53.731 { 00:11:53.731 "name": "BaseBdev2", 00:11:53.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.731 "is_configured": false, 00:11:53.731 "data_offset": 0, 00:11:53.731 "data_size": 0 00:11:53.731 }, 00:11:53.731 { 00:11:53.731 "name": "BaseBdev3", 00:11:53.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.731 "is_configured": false, 00:11:53.731 "data_offset": 0, 00:11:53.731 "data_size": 0 00:11:53.731 }, 00:11:53.731 { 00:11:53.731 "name": "BaseBdev4", 00:11:53.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.731 "is_configured": false, 00:11:53.731 "data_offset": 0, 00:11:53.731 "data_size": 0 00:11:53.731 } 00:11:53.731 ] 00:11:53.731 }' 00:11:53.731 11:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.731 11:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.990 [2024-11-20 11:21:37.022361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:53.990 [2024-11-20 11:21:37.022419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.990 [2024-11-20 11:21:37.030353] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:53.990 [2024-11-20 11:21:37.030411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:53.990 [2024-11-20 11:21:37.030425] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:53.990 [2024-11-20 11:21:37.030437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:53.990 [2024-11-20 11:21:37.030444] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:53.990 [2024-11-20 11:21:37.030469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:53.990 [2024-11-20 11:21:37.030478] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:11:53.990 [2024-11-20 11:21:37.030489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.990 [2024-11-20 11:21:37.083017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:53.990 BaseBdev1 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.990 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.990 [ 00:11:53.990 { 00:11:53.990 "name": "BaseBdev1", 00:11:53.990 "aliases": [ 00:11:54.249 "827acc4f-5326-4900-baf3-6e20ff89cdf9" 00:11:54.249 ], 00:11:54.249 "product_name": "Malloc disk", 00:11:54.249 "block_size": 512, 00:11:54.249 "num_blocks": 65536, 00:11:54.249 "uuid": "827acc4f-5326-4900-baf3-6e20ff89cdf9", 00:11:54.249 "assigned_rate_limits": { 00:11:54.249 "rw_ios_per_sec": 0, 00:11:54.249 "rw_mbytes_per_sec": 0, 00:11:54.249 "r_mbytes_per_sec": 0, 00:11:54.249 "w_mbytes_per_sec": 0 00:11:54.249 }, 00:11:54.249 "claimed": true, 00:11:54.249 "claim_type": "exclusive_write", 00:11:54.249 "zoned": false, 00:11:54.249 "supported_io_types": { 00:11:54.249 "read": true, 00:11:54.249 "write": true, 00:11:54.249 "unmap": true, 00:11:54.249 "flush": true, 00:11:54.249 "reset": true, 00:11:54.249 "nvme_admin": false, 00:11:54.249 "nvme_io": false, 00:11:54.249 "nvme_io_md": false, 00:11:54.249 "write_zeroes": true, 00:11:54.249 "zcopy": true, 00:11:54.249 "get_zone_info": false, 00:11:54.249 "zone_management": false, 00:11:54.249 "zone_append": false, 00:11:54.249 "compare": false, 00:11:54.249 "compare_and_write": false, 00:11:54.249 "abort": true, 00:11:54.249 "seek_hole": false, 00:11:54.249 "seek_data": false, 00:11:54.249 "copy": true, 00:11:54.249 "nvme_iov_md": false 00:11:54.249 }, 00:11:54.249 "memory_domains": [ 00:11:54.249 { 00:11:54.249 "dma_device_id": "system", 00:11:54.249 "dma_device_type": 1 00:11:54.249 }, 00:11:54.249 { 00:11:54.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.249 "dma_device_type": 2 00:11:54.249 } 00:11:54.249 ], 00:11:54.249 "driver_specific": {} 
00:11:54.249 } 00:11:54.249 ] 00:11:54.249 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.249 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:54.249 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:54.249 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.249 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.249 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:54.249 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.249 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.249 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.249 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.249 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.249 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.249 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.249 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.249 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.249 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.249 11:21:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.249 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.249 "name": "Existed_Raid", 00:11:54.249 "uuid": "d805e742-234c-4eab-8dd5-a3b9dc84cf14", 00:11:54.249 "strip_size_kb": 64, 00:11:54.249 "state": "configuring", 00:11:54.249 "raid_level": "raid0", 00:11:54.249 "superblock": true, 00:11:54.249 "num_base_bdevs": 4, 00:11:54.249 "num_base_bdevs_discovered": 1, 00:11:54.249 "num_base_bdevs_operational": 4, 00:11:54.249 "base_bdevs_list": [ 00:11:54.249 { 00:11:54.249 "name": "BaseBdev1", 00:11:54.249 "uuid": "827acc4f-5326-4900-baf3-6e20ff89cdf9", 00:11:54.249 "is_configured": true, 00:11:54.249 "data_offset": 2048, 00:11:54.249 "data_size": 63488 00:11:54.249 }, 00:11:54.249 { 00:11:54.249 "name": "BaseBdev2", 00:11:54.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.249 "is_configured": false, 00:11:54.249 "data_offset": 0, 00:11:54.250 "data_size": 0 00:11:54.250 }, 00:11:54.250 { 00:11:54.250 "name": "BaseBdev3", 00:11:54.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.250 "is_configured": false, 00:11:54.250 "data_offset": 0, 00:11:54.250 "data_size": 0 00:11:54.250 }, 00:11:54.250 { 00:11:54.250 "name": "BaseBdev4", 00:11:54.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.250 "is_configured": false, 00:11:54.250 "data_offset": 0, 00:11:54.250 "data_size": 0 00:11:54.250 } 00:11:54.250 ] 00:11:54.250 }' 00:11:54.250 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.250 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.509 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:54.509 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.509 11:21:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:54.509 [2024-11-20 11:21:37.594634] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:54.509 [2024-11-20 11:21:37.594706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:54.509 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.509 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:54.509 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.509 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.509 [2024-11-20 11:21:37.606765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:54.509 [2024-11-20 11:21:37.608977] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:54.509 [2024-11-20 11:21:37.609026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:54.509 [2024-11-20 11:21:37.609038] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:54.509 [2024-11-20 11:21:37.609052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:54.510 [2024-11-20 11:21:37.609060] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:54.510 [2024-11-20 11:21:37.609070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:54.510 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.510 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:54.510 11:21:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:54.510 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:54.510 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.510 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.510 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:54.510 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.510 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.510 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.510 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.510 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.510 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.510 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.510 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.510 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.510 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.768 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.768 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.768 "name": 
"Existed_Raid", 00:11:54.768 "uuid": "25dbe9b0-f301-499c-9f65-12672d71e0e1", 00:11:54.768 "strip_size_kb": 64, 00:11:54.768 "state": "configuring", 00:11:54.768 "raid_level": "raid0", 00:11:54.768 "superblock": true, 00:11:54.768 "num_base_bdevs": 4, 00:11:54.768 "num_base_bdevs_discovered": 1, 00:11:54.768 "num_base_bdevs_operational": 4, 00:11:54.768 "base_bdevs_list": [ 00:11:54.768 { 00:11:54.768 "name": "BaseBdev1", 00:11:54.768 "uuid": "827acc4f-5326-4900-baf3-6e20ff89cdf9", 00:11:54.768 "is_configured": true, 00:11:54.768 "data_offset": 2048, 00:11:54.768 "data_size": 63488 00:11:54.768 }, 00:11:54.768 { 00:11:54.768 "name": "BaseBdev2", 00:11:54.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.768 "is_configured": false, 00:11:54.768 "data_offset": 0, 00:11:54.768 "data_size": 0 00:11:54.768 }, 00:11:54.768 { 00:11:54.768 "name": "BaseBdev3", 00:11:54.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.768 "is_configured": false, 00:11:54.768 "data_offset": 0, 00:11:54.768 "data_size": 0 00:11:54.768 }, 00:11:54.768 { 00:11:54.768 "name": "BaseBdev4", 00:11:54.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.768 "is_configured": false, 00:11:54.768 "data_offset": 0, 00:11:54.768 "data_size": 0 00:11:54.768 } 00:11:54.768 ] 00:11:54.768 }' 00:11:54.768 11:21:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.769 11:21:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.028 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:55.028 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.028 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.028 [2024-11-20 11:21:38.115230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:11:55.028 BaseBdev2 00:11:55.028 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.028 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:55.028 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:55.028 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.028 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:55.028 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.028 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:55.028 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.028 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.028 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.028 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.028 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:55.028 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.028 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.028 [ 00:11:55.028 { 00:11:55.028 "name": "BaseBdev2", 00:11:55.028 "aliases": [ 00:11:55.028 "3eb182d9-a52e-4e68-a401-da024a5934b3" 00:11:55.028 ], 00:11:55.028 "product_name": "Malloc disk", 00:11:55.028 "block_size": 512, 00:11:55.028 "num_blocks": 65536, 00:11:55.028 "uuid": "3eb182d9-a52e-4e68-a401-da024a5934b3", 00:11:55.029 
"assigned_rate_limits": { 00:11:55.029 "rw_ios_per_sec": 0, 00:11:55.289 "rw_mbytes_per_sec": 0, 00:11:55.289 "r_mbytes_per_sec": 0, 00:11:55.289 "w_mbytes_per_sec": 0 00:11:55.289 }, 00:11:55.289 "claimed": true, 00:11:55.289 "claim_type": "exclusive_write", 00:11:55.289 "zoned": false, 00:11:55.289 "supported_io_types": { 00:11:55.289 "read": true, 00:11:55.289 "write": true, 00:11:55.289 "unmap": true, 00:11:55.289 "flush": true, 00:11:55.289 "reset": true, 00:11:55.289 "nvme_admin": false, 00:11:55.289 "nvme_io": false, 00:11:55.289 "nvme_io_md": false, 00:11:55.289 "write_zeroes": true, 00:11:55.289 "zcopy": true, 00:11:55.289 "get_zone_info": false, 00:11:55.289 "zone_management": false, 00:11:55.289 "zone_append": false, 00:11:55.289 "compare": false, 00:11:55.289 "compare_and_write": false, 00:11:55.289 "abort": true, 00:11:55.289 "seek_hole": false, 00:11:55.289 "seek_data": false, 00:11:55.289 "copy": true, 00:11:55.289 "nvme_iov_md": false 00:11:55.289 }, 00:11:55.289 "memory_domains": [ 00:11:55.289 { 00:11:55.289 "dma_device_id": "system", 00:11:55.289 "dma_device_type": 1 00:11:55.289 }, 00:11:55.289 { 00:11:55.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.289 "dma_device_type": 2 00:11:55.289 } 00:11:55.289 ], 00:11:55.289 "driver_specific": {} 00:11:55.289 } 00:11:55.289 ] 00:11:55.289 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.289 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:55.289 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:55.289 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:55.289 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:55.289 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:55.289 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.289 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:55.289 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.289 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.289 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.289 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.289 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.289 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.289 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.289 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.289 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.289 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.289 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.289 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.289 "name": "Existed_Raid", 00:11:55.289 "uuid": "25dbe9b0-f301-499c-9f65-12672d71e0e1", 00:11:55.289 "strip_size_kb": 64, 00:11:55.289 "state": "configuring", 00:11:55.289 "raid_level": "raid0", 00:11:55.289 "superblock": true, 00:11:55.289 "num_base_bdevs": 4, 00:11:55.289 "num_base_bdevs_discovered": 2, 00:11:55.289 "num_base_bdevs_operational": 4, 
00:11:55.289 "base_bdevs_list": [ 00:11:55.289 { 00:11:55.289 "name": "BaseBdev1", 00:11:55.289 "uuid": "827acc4f-5326-4900-baf3-6e20ff89cdf9", 00:11:55.290 "is_configured": true, 00:11:55.290 "data_offset": 2048, 00:11:55.290 "data_size": 63488 00:11:55.290 }, 00:11:55.290 { 00:11:55.290 "name": "BaseBdev2", 00:11:55.290 "uuid": "3eb182d9-a52e-4e68-a401-da024a5934b3", 00:11:55.290 "is_configured": true, 00:11:55.290 "data_offset": 2048, 00:11:55.290 "data_size": 63488 00:11:55.290 }, 00:11:55.290 { 00:11:55.290 "name": "BaseBdev3", 00:11:55.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.290 "is_configured": false, 00:11:55.290 "data_offset": 0, 00:11:55.290 "data_size": 0 00:11:55.290 }, 00:11:55.290 { 00:11:55.290 "name": "BaseBdev4", 00:11:55.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.290 "is_configured": false, 00:11:55.290 "data_offset": 0, 00:11:55.290 "data_size": 0 00:11:55.290 } 00:11:55.290 ] 00:11:55.290 }' 00:11:55.290 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.290 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.550 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:55.550 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.550 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.550 [2024-11-20 11:21:38.658998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:55.550 BaseBdev3 00:11:55.550 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.550 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:55.550 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:11:55.550 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.550 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:55.550 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.550 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:55.550 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.550 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.550 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.810 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.810 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:55.810 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.810 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.810 [ 00:11:55.810 { 00:11:55.810 "name": "BaseBdev3", 00:11:55.810 "aliases": [ 00:11:55.810 "e141fa3a-fe0d-46a7-8210-9448e50e4e34" 00:11:55.810 ], 00:11:55.810 "product_name": "Malloc disk", 00:11:55.810 "block_size": 512, 00:11:55.810 "num_blocks": 65536, 00:11:55.810 "uuid": "e141fa3a-fe0d-46a7-8210-9448e50e4e34", 00:11:55.810 "assigned_rate_limits": { 00:11:55.810 "rw_ios_per_sec": 0, 00:11:55.810 "rw_mbytes_per_sec": 0, 00:11:55.810 "r_mbytes_per_sec": 0, 00:11:55.810 "w_mbytes_per_sec": 0 00:11:55.810 }, 00:11:55.810 "claimed": true, 00:11:55.810 "claim_type": "exclusive_write", 00:11:55.810 "zoned": false, 00:11:55.810 "supported_io_types": { 00:11:55.810 "read": true, 00:11:55.810 
"write": true, 00:11:55.810 "unmap": true, 00:11:55.810 "flush": true, 00:11:55.810 "reset": true, 00:11:55.810 "nvme_admin": false, 00:11:55.810 "nvme_io": false, 00:11:55.810 "nvme_io_md": false, 00:11:55.810 "write_zeroes": true, 00:11:55.810 "zcopy": true, 00:11:55.810 "get_zone_info": false, 00:11:55.810 "zone_management": false, 00:11:55.810 "zone_append": false, 00:11:55.810 "compare": false, 00:11:55.810 "compare_and_write": false, 00:11:55.810 "abort": true, 00:11:55.810 "seek_hole": false, 00:11:55.810 "seek_data": false, 00:11:55.810 "copy": true, 00:11:55.810 "nvme_iov_md": false 00:11:55.810 }, 00:11:55.810 "memory_domains": [ 00:11:55.810 { 00:11:55.810 "dma_device_id": "system", 00:11:55.810 "dma_device_type": 1 00:11:55.810 }, 00:11:55.810 { 00:11:55.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.810 "dma_device_type": 2 00:11:55.810 } 00:11:55.810 ], 00:11:55.810 "driver_specific": {} 00:11:55.810 } 00:11:55.810 ] 00:11:55.810 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.810 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:55.810 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:55.810 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:55.810 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:55.810 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.810 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.810 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:55.810 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:55.810 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.810 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.810 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.810 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.811 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.811 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.811 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.811 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.811 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.811 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.811 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.811 "name": "Existed_Raid", 00:11:55.811 "uuid": "25dbe9b0-f301-499c-9f65-12672d71e0e1", 00:11:55.811 "strip_size_kb": 64, 00:11:55.811 "state": "configuring", 00:11:55.811 "raid_level": "raid0", 00:11:55.811 "superblock": true, 00:11:55.811 "num_base_bdevs": 4, 00:11:55.811 "num_base_bdevs_discovered": 3, 00:11:55.811 "num_base_bdevs_operational": 4, 00:11:55.811 "base_bdevs_list": [ 00:11:55.811 { 00:11:55.811 "name": "BaseBdev1", 00:11:55.811 "uuid": "827acc4f-5326-4900-baf3-6e20ff89cdf9", 00:11:55.811 "is_configured": true, 00:11:55.811 "data_offset": 2048, 00:11:55.811 "data_size": 63488 00:11:55.811 }, 00:11:55.811 { 00:11:55.811 "name": "BaseBdev2", 00:11:55.811 "uuid": 
"3eb182d9-a52e-4e68-a401-da024a5934b3", 00:11:55.811 "is_configured": true, 00:11:55.811 "data_offset": 2048, 00:11:55.811 "data_size": 63488 00:11:55.811 }, 00:11:55.811 { 00:11:55.811 "name": "BaseBdev3", 00:11:55.811 "uuid": "e141fa3a-fe0d-46a7-8210-9448e50e4e34", 00:11:55.811 "is_configured": true, 00:11:55.811 "data_offset": 2048, 00:11:55.811 "data_size": 63488 00:11:55.811 }, 00:11:55.811 { 00:11:55.811 "name": "BaseBdev4", 00:11:55.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.811 "is_configured": false, 00:11:55.811 "data_offset": 0, 00:11:55.811 "data_size": 0 00:11:55.811 } 00:11:55.811 ] 00:11:55.811 }' 00:11:55.811 11:21:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.811 11:21:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.071 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:56.071 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.071 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.071 [2024-11-20 11:21:39.177881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:56.071 [2024-11-20 11:21:39.178237] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:56.071 [2024-11-20 11:21:39.178260] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:56.071 [2024-11-20 11:21:39.178600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:56.071 [2024-11-20 11:21:39.178818] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:56.071 [2024-11-20 11:21:39.178845] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:11:56.071 [2024-11-20 11:21:39.179061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.071 BaseBdev4 00:11:56.071 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.071 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:56.071 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:56.071 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:56.071 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:56.071 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:56.071 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:56.071 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:56.071 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.071 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.331 [ 00:11:56.331 { 00:11:56.331 "name": "BaseBdev4", 00:11:56.331 "aliases": [ 00:11:56.331 "894e9855-9cfd-489c-b2e8-dcb6023f585f" 00:11:56.331 ], 00:11:56.331 "product_name": "Malloc disk", 00:11:56.331 "block_size": 512, 00:11:56.331 
"num_blocks": 65536, 00:11:56.331 "uuid": "894e9855-9cfd-489c-b2e8-dcb6023f585f", 00:11:56.331 "assigned_rate_limits": { 00:11:56.331 "rw_ios_per_sec": 0, 00:11:56.331 "rw_mbytes_per_sec": 0, 00:11:56.331 "r_mbytes_per_sec": 0, 00:11:56.331 "w_mbytes_per_sec": 0 00:11:56.331 }, 00:11:56.331 "claimed": true, 00:11:56.331 "claim_type": "exclusive_write", 00:11:56.331 "zoned": false, 00:11:56.331 "supported_io_types": { 00:11:56.331 "read": true, 00:11:56.331 "write": true, 00:11:56.331 "unmap": true, 00:11:56.331 "flush": true, 00:11:56.331 "reset": true, 00:11:56.331 "nvme_admin": false, 00:11:56.331 "nvme_io": false, 00:11:56.331 "nvme_io_md": false, 00:11:56.331 "write_zeroes": true, 00:11:56.331 "zcopy": true, 00:11:56.331 "get_zone_info": false, 00:11:56.331 "zone_management": false, 00:11:56.331 "zone_append": false, 00:11:56.331 "compare": false, 00:11:56.331 "compare_and_write": false, 00:11:56.331 "abort": true, 00:11:56.331 "seek_hole": false, 00:11:56.331 "seek_data": false, 00:11:56.331 "copy": true, 00:11:56.331 "nvme_iov_md": false 00:11:56.331 }, 00:11:56.331 "memory_domains": [ 00:11:56.331 { 00:11:56.331 "dma_device_id": "system", 00:11:56.331 "dma_device_type": 1 00:11:56.331 }, 00:11:56.331 { 00:11:56.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.331 "dma_device_type": 2 00:11:56.331 } 00:11:56.331 ], 00:11:56.331 "driver_specific": {} 00:11:56.331 } 00:11:56.331 ] 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.331 "name": "Existed_Raid", 00:11:56.331 "uuid": "25dbe9b0-f301-499c-9f65-12672d71e0e1", 00:11:56.331 "strip_size_kb": 64, 00:11:56.331 "state": "online", 00:11:56.331 "raid_level": "raid0", 00:11:56.331 "superblock": true, 00:11:56.331 "num_base_bdevs": 4, 
00:11:56.331 "num_base_bdevs_discovered": 4, 00:11:56.331 "num_base_bdevs_operational": 4, 00:11:56.331 "base_bdevs_list": [ 00:11:56.331 { 00:11:56.331 "name": "BaseBdev1", 00:11:56.331 "uuid": "827acc4f-5326-4900-baf3-6e20ff89cdf9", 00:11:56.331 "is_configured": true, 00:11:56.331 "data_offset": 2048, 00:11:56.331 "data_size": 63488 00:11:56.331 }, 00:11:56.331 { 00:11:56.331 "name": "BaseBdev2", 00:11:56.331 "uuid": "3eb182d9-a52e-4e68-a401-da024a5934b3", 00:11:56.331 "is_configured": true, 00:11:56.331 "data_offset": 2048, 00:11:56.331 "data_size": 63488 00:11:56.331 }, 00:11:56.331 { 00:11:56.331 "name": "BaseBdev3", 00:11:56.331 "uuid": "e141fa3a-fe0d-46a7-8210-9448e50e4e34", 00:11:56.331 "is_configured": true, 00:11:56.331 "data_offset": 2048, 00:11:56.331 "data_size": 63488 00:11:56.331 }, 00:11:56.331 { 00:11:56.331 "name": "BaseBdev4", 00:11:56.331 "uuid": "894e9855-9cfd-489c-b2e8-dcb6023f585f", 00:11:56.331 "is_configured": true, 00:11:56.331 "data_offset": 2048, 00:11:56.331 "data_size": 63488 00:11:56.331 } 00:11:56.331 ] 00:11:56.331 }' 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.331 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.591 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:56.591 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:56.591 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:56.591 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:56.591 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:56.591 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:56.591 
11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:56.591 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.591 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.591 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:56.591 [2024-11-20 11:21:39.689525] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:56.591 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.872 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:56.872 "name": "Existed_Raid", 00:11:56.872 "aliases": [ 00:11:56.872 "25dbe9b0-f301-499c-9f65-12672d71e0e1" 00:11:56.872 ], 00:11:56.872 "product_name": "Raid Volume", 00:11:56.872 "block_size": 512, 00:11:56.872 "num_blocks": 253952, 00:11:56.872 "uuid": "25dbe9b0-f301-499c-9f65-12672d71e0e1", 00:11:56.872 "assigned_rate_limits": { 00:11:56.872 "rw_ios_per_sec": 0, 00:11:56.872 "rw_mbytes_per_sec": 0, 00:11:56.872 "r_mbytes_per_sec": 0, 00:11:56.872 "w_mbytes_per_sec": 0 00:11:56.872 }, 00:11:56.872 "claimed": false, 00:11:56.872 "zoned": false, 00:11:56.872 "supported_io_types": { 00:11:56.872 "read": true, 00:11:56.872 "write": true, 00:11:56.872 "unmap": true, 00:11:56.872 "flush": true, 00:11:56.872 "reset": true, 00:11:56.872 "nvme_admin": false, 00:11:56.872 "nvme_io": false, 00:11:56.872 "nvme_io_md": false, 00:11:56.872 "write_zeroes": true, 00:11:56.872 "zcopy": false, 00:11:56.872 "get_zone_info": false, 00:11:56.872 "zone_management": false, 00:11:56.872 "zone_append": false, 00:11:56.872 "compare": false, 00:11:56.872 "compare_and_write": false, 00:11:56.872 "abort": false, 00:11:56.872 "seek_hole": false, 00:11:56.872 "seek_data": false, 00:11:56.872 "copy": false, 00:11:56.872 
"nvme_iov_md": false 00:11:56.872 }, 00:11:56.872 "memory_domains": [ 00:11:56.872 { 00:11:56.872 "dma_device_id": "system", 00:11:56.872 "dma_device_type": 1 00:11:56.872 }, 00:11:56.872 { 00:11:56.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.872 "dma_device_type": 2 00:11:56.872 }, 00:11:56.872 { 00:11:56.872 "dma_device_id": "system", 00:11:56.872 "dma_device_type": 1 00:11:56.872 }, 00:11:56.872 { 00:11:56.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.872 "dma_device_type": 2 00:11:56.872 }, 00:11:56.872 { 00:11:56.872 "dma_device_id": "system", 00:11:56.872 "dma_device_type": 1 00:11:56.872 }, 00:11:56.872 { 00:11:56.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.872 "dma_device_type": 2 00:11:56.872 }, 00:11:56.872 { 00:11:56.872 "dma_device_id": "system", 00:11:56.872 "dma_device_type": 1 00:11:56.872 }, 00:11:56.872 { 00:11:56.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.872 "dma_device_type": 2 00:11:56.872 } 00:11:56.872 ], 00:11:56.872 "driver_specific": { 00:11:56.872 "raid": { 00:11:56.872 "uuid": "25dbe9b0-f301-499c-9f65-12672d71e0e1", 00:11:56.872 "strip_size_kb": 64, 00:11:56.872 "state": "online", 00:11:56.872 "raid_level": "raid0", 00:11:56.872 "superblock": true, 00:11:56.872 "num_base_bdevs": 4, 00:11:56.872 "num_base_bdevs_discovered": 4, 00:11:56.872 "num_base_bdevs_operational": 4, 00:11:56.872 "base_bdevs_list": [ 00:11:56.872 { 00:11:56.872 "name": "BaseBdev1", 00:11:56.872 "uuid": "827acc4f-5326-4900-baf3-6e20ff89cdf9", 00:11:56.872 "is_configured": true, 00:11:56.872 "data_offset": 2048, 00:11:56.872 "data_size": 63488 00:11:56.872 }, 00:11:56.872 { 00:11:56.872 "name": "BaseBdev2", 00:11:56.872 "uuid": "3eb182d9-a52e-4e68-a401-da024a5934b3", 00:11:56.872 "is_configured": true, 00:11:56.872 "data_offset": 2048, 00:11:56.872 "data_size": 63488 00:11:56.872 }, 00:11:56.872 { 00:11:56.872 "name": "BaseBdev3", 00:11:56.872 "uuid": "e141fa3a-fe0d-46a7-8210-9448e50e4e34", 00:11:56.872 "is_configured": true, 
00:11:56.872 "data_offset": 2048, 00:11:56.872 "data_size": 63488 00:11:56.872 }, 00:11:56.872 { 00:11:56.872 "name": "BaseBdev4", 00:11:56.872 "uuid": "894e9855-9cfd-489c-b2e8-dcb6023f585f", 00:11:56.872 "is_configured": true, 00:11:56.872 "data_offset": 2048, 00:11:56.872 "data_size": 63488 00:11:56.872 } 00:11:56.872 ] 00:11:56.872 } 00:11:56.872 } 00:11:56.872 }' 00:11:56.872 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:56.872 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:56.872 BaseBdev2 00:11:56.872 BaseBdev3 00:11:56.872 BaseBdev4' 00:11:56.872 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.872 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:56.872 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.872 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:56.872 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.872 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.872 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.872 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.872 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.872 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.872 11:21:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.872 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:56.872 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.872 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.873 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.873 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.873 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.873 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.873 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.873 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:56.873 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.873 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.873 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.873 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.873 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.873 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.873 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:56.873 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.873 11:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:56.873 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.873 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.133 11:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.133 [2024-11-20 11:21:40.012698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:57.133 [2024-11-20 11:21:40.012738] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.133 [2024-11-20 11:21:40.012799] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.133 "name": "Existed_Raid", 00:11:57.133 "uuid": "25dbe9b0-f301-499c-9f65-12672d71e0e1", 00:11:57.133 "strip_size_kb": 64, 00:11:57.133 "state": "offline", 00:11:57.133 "raid_level": "raid0", 00:11:57.133 "superblock": true, 00:11:57.133 "num_base_bdevs": 4, 00:11:57.133 "num_base_bdevs_discovered": 3, 00:11:57.133 "num_base_bdevs_operational": 3, 00:11:57.133 "base_bdevs_list": [ 00:11:57.133 { 00:11:57.133 "name": null, 00:11:57.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.133 "is_configured": false, 00:11:57.133 "data_offset": 0, 00:11:57.133 "data_size": 63488 00:11:57.133 }, 00:11:57.133 { 00:11:57.133 "name": "BaseBdev2", 00:11:57.133 "uuid": "3eb182d9-a52e-4e68-a401-da024a5934b3", 00:11:57.133 "is_configured": true, 00:11:57.133 "data_offset": 2048, 00:11:57.133 "data_size": 63488 00:11:57.133 }, 00:11:57.133 { 00:11:57.133 "name": "BaseBdev3", 00:11:57.133 "uuid": "e141fa3a-fe0d-46a7-8210-9448e50e4e34", 00:11:57.133 "is_configured": true, 00:11:57.133 "data_offset": 2048, 00:11:57.133 "data_size": 63488 00:11:57.133 }, 00:11:57.133 { 00:11:57.133 "name": "BaseBdev4", 00:11:57.133 "uuid": "894e9855-9cfd-489c-b2e8-dcb6023f585f", 00:11:57.133 "is_configured": true, 00:11:57.133 "data_offset": 2048, 00:11:57.133 "data_size": 63488 00:11:57.133 } 00:11:57.133 ] 00:11:57.133 }' 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.133 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:57.703 11:21:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.703 [2024-11-20 11:21:40.614046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:57.703 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:57.704 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.704 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.704 [2024-11-20 11:21:40.777618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:57.964 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.964 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:57.964 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.964 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.964 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.964 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.964 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:57.964 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.964 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:57.964 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:57.964 11:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:57.964 11:21:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.964 11:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.964 [2024-11-20 11:21:40.945170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:57.964 [2024-11-20 11:21:40.945241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:57.964 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.964 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:57.964 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.964 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.964 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.964 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.964 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.225 BaseBdev2 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.225 [ 00:11:58.225 { 00:11:58.225 "name": "BaseBdev2", 00:11:58.225 "aliases": [ 00:11:58.225 
"e7e3b293-27bd-4a89-87e4-66f9529c470d" 00:11:58.225 ], 00:11:58.225 "product_name": "Malloc disk", 00:11:58.225 "block_size": 512, 00:11:58.225 "num_blocks": 65536, 00:11:58.225 "uuid": "e7e3b293-27bd-4a89-87e4-66f9529c470d", 00:11:58.225 "assigned_rate_limits": { 00:11:58.225 "rw_ios_per_sec": 0, 00:11:58.225 "rw_mbytes_per_sec": 0, 00:11:58.225 "r_mbytes_per_sec": 0, 00:11:58.225 "w_mbytes_per_sec": 0 00:11:58.225 }, 00:11:58.225 "claimed": false, 00:11:58.225 "zoned": false, 00:11:58.225 "supported_io_types": { 00:11:58.225 "read": true, 00:11:58.225 "write": true, 00:11:58.225 "unmap": true, 00:11:58.225 "flush": true, 00:11:58.225 "reset": true, 00:11:58.225 "nvme_admin": false, 00:11:58.225 "nvme_io": false, 00:11:58.225 "nvme_io_md": false, 00:11:58.225 "write_zeroes": true, 00:11:58.225 "zcopy": true, 00:11:58.225 "get_zone_info": false, 00:11:58.225 "zone_management": false, 00:11:58.225 "zone_append": false, 00:11:58.225 "compare": false, 00:11:58.225 "compare_and_write": false, 00:11:58.225 "abort": true, 00:11:58.225 "seek_hole": false, 00:11:58.225 "seek_data": false, 00:11:58.225 "copy": true, 00:11:58.225 "nvme_iov_md": false 00:11:58.225 }, 00:11:58.225 "memory_domains": [ 00:11:58.225 { 00:11:58.225 "dma_device_id": "system", 00:11:58.225 "dma_device_type": 1 00:11:58.225 }, 00:11:58.225 { 00:11:58.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.225 "dma_device_type": 2 00:11:58.225 } 00:11:58.225 ], 00:11:58.225 "driver_specific": {} 00:11:58.225 } 00:11:58.225 ] 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:58.225 11:21:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.225 BaseBdev3 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.225 [ 00:11:58.225 { 
00:11:58.225 "name": "BaseBdev3", 00:11:58.225 "aliases": [ 00:11:58.225 "c05061fc-1ae7-410b-85b7-18efdacac60b" 00:11:58.225 ], 00:11:58.225 "product_name": "Malloc disk", 00:11:58.225 "block_size": 512, 00:11:58.225 "num_blocks": 65536, 00:11:58.225 "uuid": "c05061fc-1ae7-410b-85b7-18efdacac60b", 00:11:58.225 "assigned_rate_limits": { 00:11:58.225 "rw_ios_per_sec": 0, 00:11:58.225 "rw_mbytes_per_sec": 0, 00:11:58.225 "r_mbytes_per_sec": 0, 00:11:58.225 "w_mbytes_per_sec": 0 00:11:58.225 }, 00:11:58.225 "claimed": false, 00:11:58.225 "zoned": false, 00:11:58.225 "supported_io_types": { 00:11:58.225 "read": true, 00:11:58.225 "write": true, 00:11:58.225 "unmap": true, 00:11:58.225 "flush": true, 00:11:58.225 "reset": true, 00:11:58.225 "nvme_admin": false, 00:11:58.225 "nvme_io": false, 00:11:58.225 "nvme_io_md": false, 00:11:58.225 "write_zeroes": true, 00:11:58.225 "zcopy": true, 00:11:58.225 "get_zone_info": false, 00:11:58.225 "zone_management": false, 00:11:58.225 "zone_append": false, 00:11:58.225 "compare": false, 00:11:58.225 "compare_and_write": false, 00:11:58.225 "abort": true, 00:11:58.225 "seek_hole": false, 00:11:58.225 "seek_data": false, 00:11:58.225 "copy": true, 00:11:58.225 "nvme_iov_md": false 00:11:58.225 }, 00:11:58.225 "memory_domains": [ 00:11:58.225 { 00:11:58.225 "dma_device_id": "system", 00:11:58.225 "dma_device_type": 1 00:11:58.225 }, 00:11:58.225 { 00:11:58.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.225 "dma_device_type": 2 00:11:58.225 } 00:11:58.225 ], 00:11:58.225 "driver_specific": {} 00:11:58.225 } 00:11:58.225 ] 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.225 BaseBdev4 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.225 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.485 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.485 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:58.485 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.485 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:58.485 [ 00:11:58.485 { 00:11:58.485 "name": "BaseBdev4", 00:11:58.485 "aliases": [ 00:11:58.485 "ea0599bb-8e15-41e1-bf2e-a8bf1780cca8" 00:11:58.485 ], 00:11:58.485 "product_name": "Malloc disk", 00:11:58.485 "block_size": 512, 00:11:58.485 "num_blocks": 65536, 00:11:58.485 "uuid": "ea0599bb-8e15-41e1-bf2e-a8bf1780cca8", 00:11:58.485 "assigned_rate_limits": { 00:11:58.485 "rw_ios_per_sec": 0, 00:11:58.485 "rw_mbytes_per_sec": 0, 00:11:58.485 "r_mbytes_per_sec": 0, 00:11:58.485 "w_mbytes_per_sec": 0 00:11:58.485 }, 00:11:58.485 "claimed": false, 00:11:58.485 "zoned": false, 00:11:58.485 "supported_io_types": { 00:11:58.485 "read": true, 00:11:58.485 "write": true, 00:11:58.485 "unmap": true, 00:11:58.485 "flush": true, 00:11:58.485 "reset": true, 00:11:58.485 "nvme_admin": false, 00:11:58.485 "nvme_io": false, 00:11:58.485 "nvme_io_md": false, 00:11:58.485 "write_zeroes": true, 00:11:58.485 "zcopy": true, 00:11:58.485 "get_zone_info": false, 00:11:58.485 "zone_management": false, 00:11:58.485 "zone_append": false, 00:11:58.485 "compare": false, 00:11:58.485 "compare_and_write": false, 00:11:58.485 "abort": true, 00:11:58.485 "seek_hole": false, 00:11:58.485 "seek_data": false, 00:11:58.485 "copy": true, 00:11:58.485 "nvme_iov_md": false 00:11:58.485 }, 00:11:58.485 "memory_domains": [ 00:11:58.485 { 00:11:58.485 "dma_device_id": "system", 00:11:58.485 "dma_device_type": 1 00:11:58.485 }, 00:11:58.485 { 00:11:58.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.485 "dma_device_type": 2 00:11:58.485 } 00:11:58.485 ], 00:11:58.485 "driver_specific": {} 00:11:58.485 } 00:11:58.485 ] 00:11:58.485 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.485 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:58.486 11:21:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.486 [2024-11-20 11:21:41.375123] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:58.486 [2024-11-20 11:21:41.375229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:58.486 [2024-11-20 11:21:41.375281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:58.486 [2024-11-20 11:21:41.377398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:58.486 [2024-11-20 11:21:41.377537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.486 "name": "Existed_Raid", 00:11:58.486 "uuid": "8c669966-10d3-4fb1-bf33-5995783b373f", 00:11:58.486 "strip_size_kb": 64, 00:11:58.486 "state": "configuring", 00:11:58.486 "raid_level": "raid0", 00:11:58.486 "superblock": true, 00:11:58.486 "num_base_bdevs": 4, 00:11:58.486 "num_base_bdevs_discovered": 3, 00:11:58.486 "num_base_bdevs_operational": 4, 00:11:58.486 "base_bdevs_list": [ 00:11:58.486 { 00:11:58.486 "name": "BaseBdev1", 00:11:58.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.486 "is_configured": false, 00:11:58.486 "data_offset": 0, 00:11:58.486 "data_size": 0 00:11:58.486 }, 00:11:58.486 { 00:11:58.486 "name": "BaseBdev2", 00:11:58.486 "uuid": "e7e3b293-27bd-4a89-87e4-66f9529c470d", 00:11:58.486 "is_configured": true, 00:11:58.486 "data_offset": 2048, 00:11:58.486 "data_size": 63488 
00:11:58.486 }, 00:11:58.486 { 00:11:58.486 "name": "BaseBdev3", 00:11:58.486 "uuid": "c05061fc-1ae7-410b-85b7-18efdacac60b", 00:11:58.486 "is_configured": true, 00:11:58.486 "data_offset": 2048, 00:11:58.486 "data_size": 63488 00:11:58.486 }, 00:11:58.486 { 00:11:58.486 "name": "BaseBdev4", 00:11:58.486 "uuid": "ea0599bb-8e15-41e1-bf2e-a8bf1780cca8", 00:11:58.486 "is_configured": true, 00:11:58.486 "data_offset": 2048, 00:11:58.486 "data_size": 63488 00:11:58.486 } 00:11:58.486 ] 00:11:58.486 }' 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.486 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.746 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:58.746 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.746 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.746 [2024-11-20 11:21:41.842328] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:58.746 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.746 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:58.746 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.746 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.746 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.746 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.746 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:58.746 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.746 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.746 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.746 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.746 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.746 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.746 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.746 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.006 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.006 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.006 "name": "Existed_Raid", 00:11:59.006 "uuid": "8c669966-10d3-4fb1-bf33-5995783b373f", 00:11:59.006 "strip_size_kb": 64, 00:11:59.006 "state": "configuring", 00:11:59.006 "raid_level": "raid0", 00:11:59.006 "superblock": true, 00:11:59.006 "num_base_bdevs": 4, 00:11:59.006 "num_base_bdevs_discovered": 2, 00:11:59.006 "num_base_bdevs_operational": 4, 00:11:59.006 "base_bdevs_list": [ 00:11:59.006 { 00:11:59.006 "name": "BaseBdev1", 00:11:59.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.006 "is_configured": false, 00:11:59.006 "data_offset": 0, 00:11:59.006 "data_size": 0 00:11:59.006 }, 00:11:59.006 { 00:11:59.006 "name": null, 00:11:59.006 "uuid": "e7e3b293-27bd-4a89-87e4-66f9529c470d", 00:11:59.006 "is_configured": false, 00:11:59.006 "data_offset": 0, 00:11:59.006 "data_size": 63488 
00:11:59.006 }, 00:11:59.006 { 00:11:59.006 "name": "BaseBdev3", 00:11:59.006 "uuid": "c05061fc-1ae7-410b-85b7-18efdacac60b", 00:11:59.006 "is_configured": true, 00:11:59.007 "data_offset": 2048, 00:11:59.007 "data_size": 63488 00:11:59.007 }, 00:11:59.007 { 00:11:59.007 "name": "BaseBdev4", 00:11:59.007 "uuid": "ea0599bb-8e15-41e1-bf2e-a8bf1780cca8", 00:11:59.007 "is_configured": true, 00:11:59.007 "data_offset": 2048, 00:11:59.007 "data_size": 63488 00:11:59.007 } 00:11:59.007 ] 00:11:59.007 }' 00:11:59.007 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.007 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.265 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:59.265 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.265 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.265 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.265 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.525 [2024-11-20 11:21:42.431992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:59.525 BaseBdev1 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.525 [ 00:11:59.525 { 00:11:59.525 "name": "BaseBdev1", 00:11:59.525 "aliases": [ 00:11:59.525 "949a0912-8422-47ff-bbb0-3ca1d90b5221" 00:11:59.525 ], 00:11:59.525 "product_name": "Malloc disk", 00:11:59.525 "block_size": 512, 00:11:59.525 "num_blocks": 65536, 00:11:59.525 "uuid": "949a0912-8422-47ff-bbb0-3ca1d90b5221", 00:11:59.525 "assigned_rate_limits": { 00:11:59.525 "rw_ios_per_sec": 0, 00:11:59.525 "rw_mbytes_per_sec": 0, 
00:11:59.525 "r_mbytes_per_sec": 0, 00:11:59.525 "w_mbytes_per_sec": 0 00:11:59.525 }, 00:11:59.525 "claimed": true, 00:11:59.525 "claim_type": "exclusive_write", 00:11:59.525 "zoned": false, 00:11:59.525 "supported_io_types": { 00:11:59.525 "read": true, 00:11:59.525 "write": true, 00:11:59.525 "unmap": true, 00:11:59.525 "flush": true, 00:11:59.525 "reset": true, 00:11:59.525 "nvme_admin": false, 00:11:59.525 "nvme_io": false, 00:11:59.525 "nvme_io_md": false, 00:11:59.525 "write_zeroes": true, 00:11:59.525 "zcopy": true, 00:11:59.525 "get_zone_info": false, 00:11:59.525 "zone_management": false, 00:11:59.525 "zone_append": false, 00:11:59.525 "compare": false, 00:11:59.525 "compare_and_write": false, 00:11:59.525 "abort": true, 00:11:59.525 "seek_hole": false, 00:11:59.525 "seek_data": false, 00:11:59.525 "copy": true, 00:11:59.525 "nvme_iov_md": false 00:11:59.525 }, 00:11:59.525 "memory_domains": [ 00:11:59.525 { 00:11:59.525 "dma_device_id": "system", 00:11:59.525 "dma_device_type": 1 00:11:59.525 }, 00:11:59.525 { 00:11:59.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.525 "dma_device_type": 2 00:11:59.525 } 00:11:59.525 ], 00:11:59.525 "driver_specific": {} 00:11:59.525 } 00:11:59.525 ] 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:59.525 11:21:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.525 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.525 "name": "Existed_Raid", 00:11:59.525 "uuid": "8c669966-10d3-4fb1-bf33-5995783b373f", 00:11:59.525 "strip_size_kb": 64, 00:11:59.525 "state": "configuring", 00:11:59.525 "raid_level": "raid0", 00:11:59.525 "superblock": true, 00:11:59.525 "num_base_bdevs": 4, 00:11:59.525 "num_base_bdevs_discovered": 3, 00:11:59.525 "num_base_bdevs_operational": 4, 00:11:59.525 "base_bdevs_list": [ 00:11:59.525 { 00:11:59.525 "name": "BaseBdev1", 00:11:59.525 "uuid": "949a0912-8422-47ff-bbb0-3ca1d90b5221", 00:11:59.525 "is_configured": true, 00:11:59.525 "data_offset": 2048, 00:11:59.525 "data_size": 63488 00:11:59.526 }, 00:11:59.526 { 
00:11:59.526 "name": null, 00:11:59.526 "uuid": "e7e3b293-27bd-4a89-87e4-66f9529c470d", 00:11:59.526 "is_configured": false, 00:11:59.526 "data_offset": 0, 00:11:59.526 "data_size": 63488 00:11:59.526 }, 00:11:59.526 { 00:11:59.526 "name": "BaseBdev3", 00:11:59.526 "uuid": "c05061fc-1ae7-410b-85b7-18efdacac60b", 00:11:59.526 "is_configured": true, 00:11:59.526 "data_offset": 2048, 00:11:59.526 "data_size": 63488 00:11:59.526 }, 00:11:59.526 { 00:11:59.526 "name": "BaseBdev4", 00:11:59.526 "uuid": "ea0599bb-8e15-41e1-bf2e-a8bf1780cca8", 00:11:59.526 "is_configured": true, 00:11:59.526 "data_offset": 2048, 00:11:59.526 "data_size": 63488 00:11:59.526 } 00:11:59.526 ] 00:11:59.526 }' 00:11:59.526 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.526 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.100 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.100 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.100 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.100 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:00.100 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.100 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:00.100 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:00.100 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.100 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.100 [2024-11-20 11:21:42.995373] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:00.100 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.100 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:00.100 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.100 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.101 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:00.101 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.101 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.101 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.101 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.101 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.101 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.101 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.101 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.101 11:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.101 11:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.101 11:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.101 11:21:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.101 "name": "Existed_Raid", 00:12:00.101 "uuid": "8c669966-10d3-4fb1-bf33-5995783b373f", 00:12:00.101 "strip_size_kb": 64, 00:12:00.101 "state": "configuring", 00:12:00.101 "raid_level": "raid0", 00:12:00.101 "superblock": true, 00:12:00.101 "num_base_bdevs": 4, 00:12:00.101 "num_base_bdevs_discovered": 2, 00:12:00.101 "num_base_bdevs_operational": 4, 00:12:00.101 "base_bdevs_list": [ 00:12:00.101 { 00:12:00.101 "name": "BaseBdev1", 00:12:00.101 "uuid": "949a0912-8422-47ff-bbb0-3ca1d90b5221", 00:12:00.101 "is_configured": true, 00:12:00.101 "data_offset": 2048, 00:12:00.101 "data_size": 63488 00:12:00.101 }, 00:12:00.101 { 00:12:00.101 "name": null, 00:12:00.101 "uuid": "e7e3b293-27bd-4a89-87e4-66f9529c470d", 00:12:00.101 "is_configured": false, 00:12:00.101 "data_offset": 0, 00:12:00.101 "data_size": 63488 00:12:00.101 }, 00:12:00.101 { 00:12:00.101 "name": null, 00:12:00.101 "uuid": "c05061fc-1ae7-410b-85b7-18efdacac60b", 00:12:00.101 "is_configured": false, 00:12:00.101 "data_offset": 0, 00:12:00.101 "data_size": 63488 00:12:00.101 }, 00:12:00.101 { 00:12:00.101 "name": "BaseBdev4", 00:12:00.101 "uuid": "ea0599bb-8e15-41e1-bf2e-a8bf1780cca8", 00:12:00.101 "is_configured": true, 00:12:00.101 "data_offset": 2048, 00:12:00.101 "data_size": 63488 00:12:00.101 } 00:12:00.101 ] 00:12:00.101 }' 00:12:00.101 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.101 11:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.373 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.373 11:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.373 11:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.373 11:21:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:00.373 11:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.632 [2024-11-20 11:21:43.506537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.632 "name": "Existed_Raid", 00:12:00.632 "uuid": "8c669966-10d3-4fb1-bf33-5995783b373f", 00:12:00.632 "strip_size_kb": 64, 00:12:00.632 "state": "configuring", 00:12:00.632 "raid_level": "raid0", 00:12:00.632 "superblock": true, 00:12:00.632 "num_base_bdevs": 4, 00:12:00.632 "num_base_bdevs_discovered": 3, 00:12:00.632 "num_base_bdevs_operational": 4, 00:12:00.632 "base_bdevs_list": [ 00:12:00.632 { 00:12:00.632 "name": "BaseBdev1", 00:12:00.632 "uuid": "949a0912-8422-47ff-bbb0-3ca1d90b5221", 00:12:00.632 "is_configured": true, 00:12:00.632 "data_offset": 2048, 00:12:00.632 "data_size": 63488 00:12:00.632 }, 00:12:00.632 { 00:12:00.632 "name": null, 00:12:00.632 "uuid": "e7e3b293-27bd-4a89-87e4-66f9529c470d", 00:12:00.632 "is_configured": false, 00:12:00.632 "data_offset": 0, 00:12:00.632 "data_size": 63488 00:12:00.632 }, 00:12:00.632 { 00:12:00.632 "name": "BaseBdev3", 00:12:00.632 "uuid": "c05061fc-1ae7-410b-85b7-18efdacac60b", 00:12:00.632 "is_configured": true, 00:12:00.632 "data_offset": 2048, 00:12:00.632 "data_size": 63488 00:12:00.632 }, 00:12:00.632 { 00:12:00.632 "name": "BaseBdev4", 00:12:00.632 "uuid": 
"ea0599bb-8e15-41e1-bf2e-a8bf1780cca8", 00:12:00.632 "is_configured": true, 00:12:00.632 "data_offset": 2048, 00:12:00.632 "data_size": 63488 00:12:00.632 } 00:12:00.632 ] 00:12:00.632 }' 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.632 11:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.891 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.891 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:00.891 11:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.891 11:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.891 11:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.151 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:01.151 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:01.151 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.151 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.151 [2024-11-20 11:21:44.025697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:01.151 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.151 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:01.151 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.151 11:21:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.151 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:01.151 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.151 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.151 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.151 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.151 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.151 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.151 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.151 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.151 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.151 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.151 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.151 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.151 "name": "Existed_Raid", 00:12:01.151 "uuid": "8c669966-10d3-4fb1-bf33-5995783b373f", 00:12:01.151 "strip_size_kb": 64, 00:12:01.151 "state": "configuring", 00:12:01.151 "raid_level": "raid0", 00:12:01.151 "superblock": true, 00:12:01.151 "num_base_bdevs": 4, 00:12:01.152 "num_base_bdevs_discovered": 2, 00:12:01.152 "num_base_bdevs_operational": 4, 00:12:01.152 "base_bdevs_list": [ 00:12:01.152 { 00:12:01.152 "name": null, 00:12:01.152 
"uuid": "949a0912-8422-47ff-bbb0-3ca1d90b5221", 00:12:01.152 "is_configured": false, 00:12:01.152 "data_offset": 0, 00:12:01.152 "data_size": 63488 00:12:01.152 }, 00:12:01.152 { 00:12:01.152 "name": null, 00:12:01.152 "uuid": "e7e3b293-27bd-4a89-87e4-66f9529c470d", 00:12:01.152 "is_configured": false, 00:12:01.152 "data_offset": 0, 00:12:01.152 "data_size": 63488 00:12:01.152 }, 00:12:01.152 { 00:12:01.152 "name": "BaseBdev3", 00:12:01.152 "uuid": "c05061fc-1ae7-410b-85b7-18efdacac60b", 00:12:01.152 "is_configured": true, 00:12:01.152 "data_offset": 2048, 00:12:01.152 "data_size": 63488 00:12:01.152 }, 00:12:01.152 { 00:12:01.152 "name": "BaseBdev4", 00:12:01.152 "uuid": "ea0599bb-8e15-41e1-bf2e-a8bf1780cca8", 00:12:01.152 "is_configured": true, 00:12:01.152 "data_offset": 2048, 00:12:01.152 "data_size": 63488 00:12:01.152 } 00:12:01.152 ] 00:12:01.152 }' 00:12:01.152 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.152 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.720 [2024-11-20 11:21:44.645642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.720 "name": "Existed_Raid", 00:12:01.720 "uuid": "8c669966-10d3-4fb1-bf33-5995783b373f", 00:12:01.720 "strip_size_kb": 64, 00:12:01.720 "state": "configuring", 00:12:01.720 "raid_level": "raid0", 00:12:01.720 "superblock": true, 00:12:01.720 "num_base_bdevs": 4, 00:12:01.720 "num_base_bdevs_discovered": 3, 00:12:01.720 "num_base_bdevs_operational": 4, 00:12:01.720 "base_bdevs_list": [ 00:12:01.720 { 00:12:01.720 "name": null, 00:12:01.720 "uuid": "949a0912-8422-47ff-bbb0-3ca1d90b5221", 00:12:01.720 "is_configured": false, 00:12:01.720 "data_offset": 0, 00:12:01.720 "data_size": 63488 00:12:01.720 }, 00:12:01.720 { 00:12:01.720 "name": "BaseBdev2", 00:12:01.720 "uuid": "e7e3b293-27bd-4a89-87e4-66f9529c470d", 00:12:01.720 "is_configured": true, 00:12:01.720 "data_offset": 2048, 00:12:01.720 "data_size": 63488 00:12:01.720 }, 00:12:01.720 { 00:12:01.720 "name": "BaseBdev3", 00:12:01.720 "uuid": "c05061fc-1ae7-410b-85b7-18efdacac60b", 00:12:01.720 "is_configured": true, 00:12:01.720 "data_offset": 2048, 00:12:01.720 "data_size": 63488 00:12:01.720 }, 00:12:01.720 { 00:12:01.720 "name": "BaseBdev4", 00:12:01.720 "uuid": "ea0599bb-8e15-41e1-bf2e-a8bf1780cca8", 00:12:01.720 "is_configured": true, 00:12:01.720 "data_offset": 2048, 00:12:01.720 "data_size": 63488 00:12:01.720 } 00:12:01.720 ] 00:12:01.720 }' 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.720 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.980 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.980 11:21:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:01.980 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.980 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.980 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.980 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:01.980 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:01.980 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.980 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.980 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 949a0912-8422-47ff-bbb0-3ca1d90b5221 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.240 [2024-11-20 11:21:45.172429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:02.240 [2024-11-20 11:21:45.172737] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:02.240 [2024-11-20 11:21:45.172751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:02.240 [2024-11-20 11:21:45.173023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:02.240 [2024-11-20 11:21:45.173173] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:02.240 [2024-11-20 11:21:45.173199] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:02.240 [2024-11-20 11:21:45.173332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.240 NewBaseBdev 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.240 11:21:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.240 [ 00:12:02.240 { 00:12:02.240 "name": "NewBaseBdev", 00:12:02.240 "aliases": [ 00:12:02.240 "949a0912-8422-47ff-bbb0-3ca1d90b5221" 00:12:02.240 ], 00:12:02.240 "product_name": "Malloc disk", 00:12:02.240 "block_size": 512, 00:12:02.240 "num_blocks": 65536, 00:12:02.240 "uuid": "949a0912-8422-47ff-bbb0-3ca1d90b5221", 00:12:02.240 "assigned_rate_limits": { 00:12:02.240 "rw_ios_per_sec": 0, 00:12:02.240 "rw_mbytes_per_sec": 0, 00:12:02.240 "r_mbytes_per_sec": 0, 00:12:02.240 "w_mbytes_per_sec": 0 00:12:02.240 }, 00:12:02.240 "claimed": true, 00:12:02.240 "claim_type": "exclusive_write", 00:12:02.240 "zoned": false, 00:12:02.240 "supported_io_types": { 00:12:02.240 "read": true, 00:12:02.240 "write": true, 00:12:02.240 "unmap": true, 00:12:02.240 "flush": true, 00:12:02.240 "reset": true, 00:12:02.240 "nvme_admin": false, 00:12:02.240 "nvme_io": false, 00:12:02.240 "nvme_io_md": false, 00:12:02.240 "write_zeroes": true, 00:12:02.240 "zcopy": true, 00:12:02.240 "get_zone_info": false, 00:12:02.240 "zone_management": false, 00:12:02.240 "zone_append": false, 00:12:02.240 "compare": false, 00:12:02.240 "compare_and_write": false, 00:12:02.240 "abort": true, 00:12:02.240 "seek_hole": false, 00:12:02.240 "seek_data": false, 00:12:02.240 "copy": true, 00:12:02.240 "nvme_iov_md": false 00:12:02.240 }, 00:12:02.240 "memory_domains": [ 00:12:02.240 { 00:12:02.240 "dma_device_id": "system", 00:12:02.240 "dma_device_type": 1 00:12:02.240 }, 00:12:02.240 { 00:12:02.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.240 "dma_device_type": 2 00:12:02.240 } 00:12:02.240 ], 00:12:02.240 "driver_specific": {} 00:12:02.240 } 00:12:02.240 ] 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:02.240 11:21:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:02.240 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.241 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.241 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.241 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.241 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.241 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.241 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.241 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.241 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.241 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.241 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.241 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.241 "name": "Existed_Raid", 00:12:02.241 "uuid": "8c669966-10d3-4fb1-bf33-5995783b373f", 00:12:02.241 "strip_size_kb": 64, 00:12:02.241 
"state": "online", 00:12:02.241 "raid_level": "raid0", 00:12:02.241 "superblock": true, 00:12:02.241 "num_base_bdevs": 4, 00:12:02.241 "num_base_bdevs_discovered": 4, 00:12:02.241 "num_base_bdevs_operational": 4, 00:12:02.241 "base_bdevs_list": [ 00:12:02.241 { 00:12:02.241 "name": "NewBaseBdev", 00:12:02.241 "uuid": "949a0912-8422-47ff-bbb0-3ca1d90b5221", 00:12:02.241 "is_configured": true, 00:12:02.241 "data_offset": 2048, 00:12:02.241 "data_size": 63488 00:12:02.241 }, 00:12:02.241 { 00:12:02.241 "name": "BaseBdev2", 00:12:02.241 "uuid": "e7e3b293-27bd-4a89-87e4-66f9529c470d", 00:12:02.241 "is_configured": true, 00:12:02.241 "data_offset": 2048, 00:12:02.241 "data_size": 63488 00:12:02.241 }, 00:12:02.241 { 00:12:02.241 "name": "BaseBdev3", 00:12:02.241 "uuid": "c05061fc-1ae7-410b-85b7-18efdacac60b", 00:12:02.241 "is_configured": true, 00:12:02.241 "data_offset": 2048, 00:12:02.241 "data_size": 63488 00:12:02.241 }, 00:12:02.241 { 00:12:02.241 "name": "BaseBdev4", 00:12:02.241 "uuid": "ea0599bb-8e15-41e1-bf2e-a8bf1780cca8", 00:12:02.241 "is_configured": true, 00:12:02.241 "data_offset": 2048, 00:12:02.241 "data_size": 63488 00:12:02.241 } 00:12:02.241 ] 00:12:02.241 }' 00:12:02.241 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.241 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.808 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:02.809 
11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.809 [2024-11-20 11:21:45.680110] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:02.809 "name": "Existed_Raid", 00:12:02.809 "aliases": [ 00:12:02.809 "8c669966-10d3-4fb1-bf33-5995783b373f" 00:12:02.809 ], 00:12:02.809 "product_name": "Raid Volume", 00:12:02.809 "block_size": 512, 00:12:02.809 "num_blocks": 253952, 00:12:02.809 "uuid": "8c669966-10d3-4fb1-bf33-5995783b373f", 00:12:02.809 "assigned_rate_limits": { 00:12:02.809 "rw_ios_per_sec": 0, 00:12:02.809 "rw_mbytes_per_sec": 0, 00:12:02.809 "r_mbytes_per_sec": 0, 00:12:02.809 "w_mbytes_per_sec": 0 00:12:02.809 }, 00:12:02.809 "claimed": false, 00:12:02.809 "zoned": false, 00:12:02.809 "supported_io_types": { 00:12:02.809 "read": true, 00:12:02.809 "write": true, 00:12:02.809 "unmap": true, 00:12:02.809 "flush": true, 00:12:02.809 "reset": true, 00:12:02.809 "nvme_admin": false, 00:12:02.809 "nvme_io": false, 00:12:02.809 "nvme_io_md": false, 00:12:02.809 "write_zeroes": true, 00:12:02.809 "zcopy": false, 00:12:02.809 "get_zone_info": false, 00:12:02.809 "zone_management": false, 00:12:02.809 "zone_append": false, 00:12:02.809 "compare": false, 00:12:02.809 "compare_and_write": false, 00:12:02.809 "abort": 
false, 00:12:02.809 "seek_hole": false, 00:12:02.809 "seek_data": false, 00:12:02.809 "copy": false, 00:12:02.809 "nvme_iov_md": false 00:12:02.809 }, 00:12:02.809 "memory_domains": [ 00:12:02.809 { 00:12:02.809 "dma_device_id": "system", 00:12:02.809 "dma_device_type": 1 00:12:02.809 }, 00:12:02.809 { 00:12:02.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.809 "dma_device_type": 2 00:12:02.809 }, 00:12:02.809 { 00:12:02.809 "dma_device_id": "system", 00:12:02.809 "dma_device_type": 1 00:12:02.809 }, 00:12:02.809 { 00:12:02.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.809 "dma_device_type": 2 00:12:02.809 }, 00:12:02.809 { 00:12:02.809 "dma_device_id": "system", 00:12:02.809 "dma_device_type": 1 00:12:02.809 }, 00:12:02.809 { 00:12:02.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.809 "dma_device_type": 2 00:12:02.809 }, 00:12:02.809 { 00:12:02.809 "dma_device_id": "system", 00:12:02.809 "dma_device_type": 1 00:12:02.809 }, 00:12:02.809 { 00:12:02.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.809 "dma_device_type": 2 00:12:02.809 } 00:12:02.809 ], 00:12:02.809 "driver_specific": { 00:12:02.809 "raid": { 00:12:02.809 "uuid": "8c669966-10d3-4fb1-bf33-5995783b373f", 00:12:02.809 "strip_size_kb": 64, 00:12:02.809 "state": "online", 00:12:02.809 "raid_level": "raid0", 00:12:02.809 "superblock": true, 00:12:02.809 "num_base_bdevs": 4, 00:12:02.809 "num_base_bdevs_discovered": 4, 00:12:02.809 "num_base_bdevs_operational": 4, 00:12:02.809 "base_bdevs_list": [ 00:12:02.809 { 00:12:02.809 "name": "NewBaseBdev", 00:12:02.809 "uuid": "949a0912-8422-47ff-bbb0-3ca1d90b5221", 00:12:02.809 "is_configured": true, 00:12:02.809 "data_offset": 2048, 00:12:02.809 "data_size": 63488 00:12:02.809 }, 00:12:02.809 { 00:12:02.809 "name": "BaseBdev2", 00:12:02.809 "uuid": "e7e3b293-27bd-4a89-87e4-66f9529c470d", 00:12:02.809 "is_configured": true, 00:12:02.809 "data_offset": 2048, 00:12:02.809 "data_size": 63488 00:12:02.809 }, 00:12:02.809 { 00:12:02.809 
"name": "BaseBdev3", 00:12:02.809 "uuid": "c05061fc-1ae7-410b-85b7-18efdacac60b", 00:12:02.809 "is_configured": true, 00:12:02.809 "data_offset": 2048, 00:12:02.809 "data_size": 63488 00:12:02.809 }, 00:12:02.809 { 00:12:02.809 "name": "BaseBdev4", 00:12:02.809 "uuid": "ea0599bb-8e15-41e1-bf2e-a8bf1780cca8", 00:12:02.809 "is_configured": true, 00:12:02.809 "data_offset": 2048, 00:12:02.809 "data_size": 63488 00:12:02.809 } 00:12:02.809 ] 00:12:02.809 } 00:12:02.809 } 00:12:02.809 }' 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:02.809 BaseBdev2 00:12:02.809 BaseBdev3 00:12:02.809 BaseBdev4' 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.809 11:21:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.809 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.093 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.093 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.093 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:03.093 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.093 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:03.093 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.093 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.093 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.093 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.093 11:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.093 11:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.093 11:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:03.093 11:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.093 11:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.093 [2024-11-20 11:21:46.023151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:03.093 [2024-11-20 11:21:46.023194] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.093 [2024-11-20 11:21:46.023309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.093 [2024-11-20 11:21:46.023388] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.093 [2024-11-20 11:21:46.023400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:03.093 11:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.093 11:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70190 00:12:03.093 11:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70190 ']' 00:12:03.093 11:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70190 00:12:03.093 11:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:03.094 11:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.094 11:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70190 00:12:03.094 killing process with pid 70190 00:12:03.094 11:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.094 11:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.094 11:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70190' 00:12:03.094 11:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70190 00:12:03.094 [2024-11-20 11:21:46.062125] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:03.094 11:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70190 00:12:03.663 [2024-11-20 11:21:46.515232] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:05.044 11:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:05.044 ************************************ 00:12:05.044 END TEST raid_state_function_test_sb 00:12:05.044 ************************************ 00:12:05.044 00:12:05.044 real 0m12.219s 00:12:05.044 user 0m19.420s 00:12:05.044 sys 
0m1.969s 00:12:05.044 11:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.044 11:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.044 11:21:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:12:05.044 11:21:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:05.044 11:21:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.044 11:21:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:05.044 ************************************ 00:12:05.044 START TEST raid_superblock_test 00:12:05.044 ************************************ 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:05.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70866 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70866 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70866 ']' 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.044 11:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.044 [2024-11-20 11:21:47.917389] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:12:05.044 [2024-11-20 11:21:47.917557] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70866 ]
00:12:05.044 [2024-11-20 11:21:48.081852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:05.304 [2024-11-20 11:21:48.217156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:05.565 [2024-11-20 11:21:48.449763] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:05.565 [2024-11-20 11:21:48.449818] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:05.825 malloc1
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:05.825 [2024-11-20 11:21:48.869079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:05.825 [2024-11-20 11:21:48.869156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:05.825 [2024-11-20 11:21:48.869186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:12:05.825 [2024-11-20 11:21:48.869198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:05.825 [2024-11-20 11:21:48.871682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:05.825 [2024-11-20 11:21:48.871715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:05.825 pt1
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:05.825 malloc2
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.825 11:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:05.825 [2024-11-20 11:21:48.928537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:05.825 [2024-11-20 11:21:48.928603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:05.825 [2024-11-20 11:21:48.928628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:12:05.825 [2024-11-20 11:21:48.928638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:05.825 [2024-11-20 11:21:48.931069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:05.825 [2024-11-20 11:21:48.931118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:05.825 pt2
00:12:05.826 11:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.826 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:12:05.826 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:05.826 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:12:05.826 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:12:05.826 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:12:05.826 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:05.826 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:12:05.826 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:05.826 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:12:05.826 11:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.826 11:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.085 malloc3
00:12:06.085 11:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.085 11:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:06.085 11:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.085 11:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.085 [2024-11-20 11:21:49.003779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:06.085 [2024-11-20 11:21:49.003912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:06.085 [2024-11-20 11:21:49.003970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:12:06.085 [2024-11-20 11:21:49.004007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:06.085 [2024-11-20 11:21:49.006516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:06.085 [2024-11-20 11:21:49.006596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:06.085 pt3
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.085 malloc4
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.085 [2024-11-20 11:21:49.064488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:12:06.085 [2024-11-20 11:21:49.064616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:06.085 [2024-11-20 11:21:49.064662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:12:06.085 [2024-11-20 11:21:49.064704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:06.085 [2024-11-20 11:21:49.067170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:06.085 [2024-11-20 11:21:49.067258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:12:06.085 pt4
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.085 [2024-11-20 11:21:49.076506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:06.085 [2024-11-20 11:21:49.078674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:06.085 [2024-11-20 11:21:49.078838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:06.085 [2024-11-20 11:21:49.078921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:12:06.085 [2024-11-20 11:21:49.079143] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:12:06.085 [2024-11-20 11:21:49.079157] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:12:06.085 [2024-11-20 11:21:49.079508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:12:06.085 [2024-11-20 11:21:49.079738] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:12:06.085 [2024-11-20 11:21:49.079753] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:12:06.085 [2024-11-20 11:21:49.079955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.085 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:06.085 "name": "raid_bdev1",
00:12:06.085 "uuid": "e8b62b1d-8bcf-46d3-b948-d45b4ef6a32f",
00:12:06.085 "strip_size_kb": 64,
00:12:06.085 "state": "online",
00:12:06.085 "raid_level": "raid0",
00:12:06.085 "superblock": true,
00:12:06.085 "num_base_bdevs": 4,
00:12:06.085 "num_base_bdevs_discovered": 4,
00:12:06.085 "num_base_bdevs_operational": 4,
00:12:06.085 "base_bdevs_list": [
00:12:06.085 {
00:12:06.085 "name": "pt1",
00:12:06.085 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:06.085 "is_configured": true,
00:12:06.085 "data_offset": 2048,
00:12:06.085 "data_size": 63488
00:12:06.085 },
00:12:06.085 {
00:12:06.085 "name": "pt2",
00:12:06.085 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:06.085 "is_configured": true,
00:12:06.085 "data_offset": 2048,
00:12:06.085 "data_size": 63488
00:12:06.085 },
00:12:06.085 {
00:12:06.085 "name": "pt3",
00:12:06.085 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:06.085 "is_configured": true,
00:12:06.085 "data_offset": 2048,
00:12:06.085 "data_size": 63488
00:12:06.085 },
00:12:06.085 {
00:12:06.085 "name": "pt4",
00:12:06.085 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:06.085 "is_configured": true,
00:12:06.086 "data_offset": 2048,
00:12:06.086 "data_size": 63488
00:12:06.086 }
00:12:06.086 ]
00:12:06.086 }'
00:12:06.086 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:06.086 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.655 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:12:06.655 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:12:06.655 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:06.655 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:06.655 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:12:06.655 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:06.655 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:06.655 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.655 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.655 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:06.655 [2024-11-20 11:21:49.572060] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:06.655 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.655 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:06.655 "name": "raid_bdev1",
00:12:06.655 "aliases": [
00:12:06.655 "e8b62b1d-8bcf-46d3-b948-d45b4ef6a32f"
00:12:06.656 ],
00:12:06.656 "product_name": "Raid Volume",
00:12:06.656 "block_size": 512,
00:12:06.656 "num_blocks": 253952,
00:12:06.656 "uuid": "e8b62b1d-8bcf-46d3-b948-d45b4ef6a32f",
00:12:06.656 "assigned_rate_limits": {
00:12:06.656 "rw_ios_per_sec": 0,
00:12:06.656 "rw_mbytes_per_sec": 0,
00:12:06.656 "r_mbytes_per_sec": 0,
00:12:06.656 "w_mbytes_per_sec": 0
00:12:06.656 },
00:12:06.656 "claimed": false,
00:12:06.656 "zoned": false,
00:12:06.656 "supported_io_types": {
00:12:06.656 "read": true,
00:12:06.656 "write": true,
00:12:06.656 "unmap": true,
00:12:06.656 "flush": true,
00:12:06.656 "reset": true,
00:12:06.656 "nvme_admin": false,
00:12:06.656 "nvme_io": false,
00:12:06.656 "nvme_io_md": false,
00:12:06.656 "write_zeroes": true,
00:12:06.656 "zcopy": false,
00:12:06.656 "get_zone_info": false,
00:12:06.656 "zone_management": false,
00:12:06.656 "zone_append": false,
00:12:06.656 "compare": false,
00:12:06.656 "compare_and_write": false,
00:12:06.656 "abort": false,
00:12:06.656 "seek_hole": false,
00:12:06.656 "seek_data": false,
00:12:06.656 "copy": false,
00:12:06.656 "nvme_iov_md": false
00:12:06.656 },
00:12:06.656 "memory_domains": [
00:12:06.656 {
00:12:06.656 "dma_device_id": "system",
00:12:06.656 "dma_device_type": 1
00:12:06.656 },
00:12:06.656 {
00:12:06.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:06.656 "dma_device_type": 2
00:12:06.656 },
00:12:06.656 {
00:12:06.656 "dma_device_id": "system",
00:12:06.656 "dma_device_type": 1
00:12:06.656 },
00:12:06.656 {
00:12:06.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:06.656 "dma_device_type": 2
00:12:06.656 },
00:12:06.656 {
00:12:06.656 "dma_device_id": "system",
00:12:06.656 "dma_device_type": 1
00:12:06.656 },
00:12:06.656 {
00:12:06.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:06.656 "dma_device_type": 2
00:12:06.656 },
00:12:06.656 {
00:12:06.656 "dma_device_id": "system",
00:12:06.656 "dma_device_type": 1
00:12:06.656 },
00:12:06.656 {
00:12:06.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:06.656 "dma_device_type": 2
00:12:06.656 }
00:12:06.656 ],
00:12:06.656 "driver_specific": {
00:12:06.656 "raid": {
00:12:06.656 "uuid": "e8b62b1d-8bcf-46d3-b948-d45b4ef6a32f",
00:12:06.656 "strip_size_kb": 64,
00:12:06.656 "state": "online",
00:12:06.656 "raid_level": "raid0",
00:12:06.656 "superblock": true,
00:12:06.656 "num_base_bdevs": 4,
00:12:06.656 "num_base_bdevs_discovered": 4,
00:12:06.656 "num_base_bdevs_operational": 4,
00:12:06.656 "base_bdevs_list": [
00:12:06.656 {
00:12:06.656 "name": "pt1",
00:12:06.656 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:06.656 "is_configured": true,
00:12:06.656 "data_offset": 2048,
00:12:06.656 "data_size": 63488
00:12:06.656 },
00:12:06.656 {
00:12:06.656 "name": "pt2",
00:12:06.656 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:06.656 "is_configured": true,
00:12:06.656 "data_offset": 2048,
00:12:06.656 "data_size": 63488
00:12:06.656 },
00:12:06.656 {
00:12:06.656 "name": "pt3",
00:12:06.656 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:06.656 "is_configured": true,
00:12:06.656 "data_offset": 2048,
00:12:06.656 "data_size": 63488
00:12:06.656 },
00:12:06.656 {
00:12:06.656 "name": "pt4",
00:12:06.656 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:06.656 "is_configured": true,
00:12:06.656 "data_offset": 2048,
00:12:06.656 "data_size": 63488
00:12:06.656 }
00:12:06.656 ]
00:12:06.656 }
00:12:06.656 }
00:12:06.656 }'
00:12:06.656 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:06.656 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:12:06.656 pt2
00:12:06.656 pt3
00:12:06.656 pt4'
00:12:06.656 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:06.656 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:06.656 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:06.656 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:06.656 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:06.656 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.656 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.656 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.656 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:06.656 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:06.656 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:06.656 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:12:06.656 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:06.656 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.656 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:12:06.917 [2024-11-20 11:21:49.923439] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e8b62b1d-8bcf-46d3-b948-d45b4ef6a32f
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e8b62b1d-8bcf-46d3-b948-d45b4ef6a32f ']'
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.917 [2024-11-20 11:21:49.966988] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:06.917 [2024-11-20 11:21:49.967024] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:06.917 [2024-11-20 11:21:49.967123] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:06.917 [2024-11-20 11:21:49.967204] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:06.917 [2024-11-20 11:21:49.967220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.917 11:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.917 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:12:06.917 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:12:06.917 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:06.917 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:12:06.917 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.917 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.179 [2024-11-20 11:21:50.130747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:12:07.179 [2024-11-20 11:21:50.133001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:12:07.179 [2024-11-20 11:21:50.133118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:12:07.179 [2024-11-20 11:21:50.133195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:12:07.179 [2024-11-20 11:21:50.133295] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:12:07.179 [2024-11-20 11:21:50.133398] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:12:07.179 [2024-11-20 11:21:50.133482] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:12:07.179 [2024-11-20 11:21:50.133555] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:12:07.179 [2024-11-20 11:21:50.133576] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:07.179 [2024-11-20 11:21:50.133592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:12:07.179 request:
00:12:07.179 {
00:12:07.179 "name": "raid_bdev1",
00:12:07.179 "raid_level": "raid0",
00:12:07.179 "base_bdevs": [
00:12:07.179 "malloc1",
00:12:07.179 "malloc2",
00:12:07.179 "malloc3",
00:12:07.179 "malloc4"
00:12:07.179 ],
00:12:07.179 "strip_size_kb": 64,
00:12:07.179 "superblock": false,
00:12:07.179 "method": "bdev_raid_create",
00:12:07.179 "req_id": 1
00:12:07.179 }
00:12:07.179 Got JSON-RPC error response
00:12:07.179 response:
00:12:07.179 {
00:12:07.179 "code": -17,
00:12:07.179 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:12:07.179 }
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:07.179 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.179 [2024-11-20 11:21:50.190657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:07.180 [2024-11-20 11:21:50.190817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:07.180 [2024-11-20 11:21:50.190881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:12:07.180 [2024-11-20 11:21:50.190922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:07.180 [2024-11-20 11:21:50.193914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:07.180 [2024-11-20 11:21:50.194027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:07.180 [2024-11-20 11:21:50.194193] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:12:07.180 [2024-11-20 11:21:50.194327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:07.180 pt1
00:12:07.180 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:07.180 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:12:07.180 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:07.180 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:07.180 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:07.180 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:07.180 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:07.180 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:07.180 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:07.180 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:07.180 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:07.180 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:07.180 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:07.180 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:07.180 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.180 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:07.180 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:07.180 "name": "raid_bdev1",
00:12:07.180 "uuid": "e8b62b1d-8bcf-46d3-b948-d45b4ef6a32f",
00:12:07.180 "strip_size_kb": 64,
00:12:07.180 "state": "configuring",
00:12:07.180 "raid_level": "raid0",
00:12:07.180 "superblock": true,
00:12:07.180 "num_base_bdevs": 4,
00:12:07.180 "num_base_bdevs_discovered": 1,
00:12:07.180 "num_base_bdevs_operational": 4,
00:12:07.180 "base_bdevs_list": [
00:12:07.180 {
00:12:07.180 "name": "pt1",
00:12:07.180 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:07.180 "is_configured": true,
00:12:07.180 "data_offset": 2048,
00:12:07.180 "data_size": 63488
00:12:07.180 },
00:12:07.180 {
00:12:07.180 "name": null,
00:12:07.180 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:07.180 "is_configured": false,
00:12:07.180 "data_offset": 2048,
00:12:07.180 "data_size": 63488
00:12:07.180 },
00:12:07.180 {
00:12:07.180 "name": null,
00:12:07.180 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:07.180 "is_configured": false,
00:12:07.180 "data_offset": 2048,
00:12:07.180 "data_size": 63488
00:12:07.180 },
00:12:07.180 {
00:12:07.180 "name": null,
00:12:07.180 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:07.180 "is_configured": false,
00:12:07.180 "data_offset": 2048,
00:12:07.180 "data_size": 63488
00:12:07.180 }
00:12:07.180 ]
00:12:07.180 }'
00:12:07.180 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:07.180 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.745 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:12:07.745 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:07.745 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:07.745 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.745 [2024-11-20 11:21:50.562654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:07.745 [2024-11-20 11:21:50.562826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:07.745 [2024-11-20 11:21:50.562897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:12:07.745 [2024-11-20 11:21:50.562949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:07.745 [2024-11-20 11:21:50.563609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:07.745 [2024-11-20 11:21:50.563706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:07.745 [2024-11-20 11:21:50.563864] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:07.745 [2024-11-20 11:21:50.563939]
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:07.746 pt2 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.746 [2024-11-20 11:21:50.570702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.746 11:21:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.746 "name": "raid_bdev1", 00:12:07.746 "uuid": "e8b62b1d-8bcf-46d3-b948-d45b4ef6a32f", 00:12:07.746 "strip_size_kb": 64, 00:12:07.746 "state": "configuring", 00:12:07.746 "raid_level": "raid0", 00:12:07.746 "superblock": true, 00:12:07.746 "num_base_bdevs": 4, 00:12:07.746 "num_base_bdevs_discovered": 1, 00:12:07.746 "num_base_bdevs_operational": 4, 00:12:07.746 "base_bdevs_list": [ 00:12:07.746 { 00:12:07.746 "name": "pt1", 00:12:07.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:07.746 "is_configured": true, 00:12:07.746 "data_offset": 2048, 00:12:07.746 "data_size": 63488 00:12:07.746 }, 00:12:07.746 { 00:12:07.746 "name": null, 00:12:07.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:07.746 "is_configured": false, 00:12:07.746 "data_offset": 0, 00:12:07.746 "data_size": 63488 00:12:07.746 }, 00:12:07.746 { 00:12:07.746 "name": null, 00:12:07.746 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:07.746 "is_configured": false, 00:12:07.746 "data_offset": 2048, 00:12:07.746 "data_size": 63488 00:12:07.746 }, 00:12:07.746 { 00:12:07.746 "name": null, 00:12:07.746 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:07.746 "is_configured": false, 00:12:07.746 "data_offset": 2048, 00:12:07.746 "data_size": 63488 00:12:07.746 } 00:12:07.746 ] 00:12:07.746 }' 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.746 11:21:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.004 [2024-11-20 11:21:50.977954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:08.004 [2024-11-20 11:21:50.978105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.004 [2024-11-20 11:21:50.978141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:08.004 [2024-11-20 11:21:50.978154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.004 [2024-11-20 11:21:50.978742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.004 [2024-11-20 11:21:50.978774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:08.004 [2024-11-20 11:21:50.978895] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:08.004 [2024-11-20 11:21:50.978931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:08.004 pt2 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.004 [2024-11-20 11:21:50.985960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:08.004 [2024-11-20 11:21:50.986053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.004 [2024-11-20 11:21:50.986092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:08.004 [2024-11-20 11:21:50.986108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.004 [2024-11-20 11:21:50.986717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.004 [2024-11-20 11:21:50.986762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:08.004 [2024-11-20 11:21:50.986880] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:08.004 [2024-11-20 11:21:50.986911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:08.004 pt3 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.004 [2024-11-20 11:21:50.993919] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:08.004 [2024-11-20 11:21:50.994018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.004 [2024-11-20 11:21:50.994052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:08.004 [2024-11-20 11:21:50.994065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.004 [2024-11-20 11:21:50.994693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.004 [2024-11-20 11:21:50.994724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:08.004 [2024-11-20 11:21:50.994850] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:08.004 [2024-11-20 11:21:50.994888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:08.004 [2024-11-20 11:21:50.995079] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:08.004 [2024-11-20 11:21:50.995096] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:08.004 [2024-11-20 11:21:50.995427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:08.004 [2024-11-20 11:21:50.995675] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:08.004 [2024-11-20 11:21:50.995707] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:08.004 [2024-11-20 11:21:50.995888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.004 pt4 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.004 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.005 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.005 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.005 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.005 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.005 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.005 11:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.005 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.005 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.005 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.005 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.005 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.005 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.005 "name": "raid_bdev1", 00:12:08.005 "uuid": "e8b62b1d-8bcf-46d3-b948-d45b4ef6a32f", 00:12:08.005 "strip_size_kb": 64, 00:12:08.005 "state": "online", 00:12:08.005 "raid_level": "raid0", 00:12:08.005 
"superblock": true, 00:12:08.005 "num_base_bdevs": 4, 00:12:08.005 "num_base_bdevs_discovered": 4, 00:12:08.005 "num_base_bdevs_operational": 4, 00:12:08.005 "base_bdevs_list": [ 00:12:08.005 { 00:12:08.005 "name": "pt1", 00:12:08.005 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:08.005 "is_configured": true, 00:12:08.005 "data_offset": 2048, 00:12:08.005 "data_size": 63488 00:12:08.005 }, 00:12:08.005 { 00:12:08.005 "name": "pt2", 00:12:08.005 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.005 "is_configured": true, 00:12:08.005 "data_offset": 2048, 00:12:08.005 "data_size": 63488 00:12:08.005 }, 00:12:08.005 { 00:12:08.005 "name": "pt3", 00:12:08.005 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:08.005 "is_configured": true, 00:12:08.005 "data_offset": 2048, 00:12:08.005 "data_size": 63488 00:12:08.005 }, 00:12:08.005 { 00:12:08.005 "name": "pt4", 00:12:08.005 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:08.005 "is_configured": true, 00:12:08.005 "data_offset": 2048, 00:12:08.005 "data_size": 63488 00:12:08.005 } 00:12:08.005 ] 00:12:08.005 }' 00:12:08.005 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.005 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.571 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:08.571 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:08.571 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:08.571 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:08.571 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:08.571 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:08.571 11:21:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:08.571 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.571 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:08.571 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.571 [2024-11-20 11:21:51.397786] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:08.571 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.571 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:08.571 "name": "raid_bdev1", 00:12:08.571 "aliases": [ 00:12:08.571 "e8b62b1d-8bcf-46d3-b948-d45b4ef6a32f" 00:12:08.571 ], 00:12:08.571 "product_name": "Raid Volume", 00:12:08.571 "block_size": 512, 00:12:08.571 "num_blocks": 253952, 00:12:08.571 "uuid": "e8b62b1d-8bcf-46d3-b948-d45b4ef6a32f", 00:12:08.571 "assigned_rate_limits": { 00:12:08.571 "rw_ios_per_sec": 0, 00:12:08.571 "rw_mbytes_per_sec": 0, 00:12:08.571 "r_mbytes_per_sec": 0, 00:12:08.571 "w_mbytes_per_sec": 0 00:12:08.571 }, 00:12:08.571 "claimed": false, 00:12:08.571 "zoned": false, 00:12:08.571 "supported_io_types": { 00:12:08.571 "read": true, 00:12:08.571 "write": true, 00:12:08.571 "unmap": true, 00:12:08.571 "flush": true, 00:12:08.571 "reset": true, 00:12:08.571 "nvme_admin": false, 00:12:08.571 "nvme_io": false, 00:12:08.571 "nvme_io_md": false, 00:12:08.571 "write_zeroes": true, 00:12:08.571 "zcopy": false, 00:12:08.571 "get_zone_info": false, 00:12:08.571 "zone_management": false, 00:12:08.571 "zone_append": false, 00:12:08.571 "compare": false, 00:12:08.571 "compare_and_write": false, 00:12:08.571 "abort": false, 00:12:08.571 "seek_hole": false, 00:12:08.571 "seek_data": false, 00:12:08.571 "copy": false, 00:12:08.571 "nvme_iov_md": false 00:12:08.571 }, 00:12:08.571 
"memory_domains": [ 00:12:08.571 { 00:12:08.571 "dma_device_id": "system", 00:12:08.571 "dma_device_type": 1 00:12:08.571 }, 00:12:08.571 { 00:12:08.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.571 "dma_device_type": 2 00:12:08.571 }, 00:12:08.571 { 00:12:08.571 "dma_device_id": "system", 00:12:08.571 "dma_device_type": 1 00:12:08.571 }, 00:12:08.571 { 00:12:08.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.571 "dma_device_type": 2 00:12:08.571 }, 00:12:08.571 { 00:12:08.571 "dma_device_id": "system", 00:12:08.571 "dma_device_type": 1 00:12:08.571 }, 00:12:08.571 { 00:12:08.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.571 "dma_device_type": 2 00:12:08.571 }, 00:12:08.571 { 00:12:08.571 "dma_device_id": "system", 00:12:08.571 "dma_device_type": 1 00:12:08.571 }, 00:12:08.571 { 00:12:08.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.571 "dma_device_type": 2 00:12:08.571 } 00:12:08.571 ], 00:12:08.571 "driver_specific": { 00:12:08.571 "raid": { 00:12:08.571 "uuid": "e8b62b1d-8bcf-46d3-b948-d45b4ef6a32f", 00:12:08.571 "strip_size_kb": 64, 00:12:08.571 "state": "online", 00:12:08.571 "raid_level": "raid0", 00:12:08.571 "superblock": true, 00:12:08.572 "num_base_bdevs": 4, 00:12:08.572 "num_base_bdevs_discovered": 4, 00:12:08.572 "num_base_bdevs_operational": 4, 00:12:08.572 "base_bdevs_list": [ 00:12:08.572 { 00:12:08.572 "name": "pt1", 00:12:08.572 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:08.572 "is_configured": true, 00:12:08.572 "data_offset": 2048, 00:12:08.572 "data_size": 63488 00:12:08.572 }, 00:12:08.572 { 00:12:08.572 "name": "pt2", 00:12:08.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.572 "is_configured": true, 00:12:08.572 "data_offset": 2048, 00:12:08.572 "data_size": 63488 00:12:08.572 }, 00:12:08.572 { 00:12:08.572 "name": "pt3", 00:12:08.572 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:08.572 "is_configured": true, 00:12:08.572 "data_offset": 2048, 00:12:08.572 "data_size": 63488 
00:12:08.572 }, 00:12:08.572 { 00:12:08.572 "name": "pt4", 00:12:08.572 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:08.572 "is_configured": true, 00:12:08.572 "data_offset": 2048, 00:12:08.572 "data_size": 63488 00:12:08.572 } 00:12:08.572 ] 00:12:08.572 } 00:12:08.572 } 00:12:08.572 }' 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:08.572 pt2 00:12:08.572 pt3 00:12:08.572 pt4' 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.572 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.852 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:08.852 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:08.852 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.852 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.852 [2024-11-20 11:21:51.693175] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:08.852 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.852 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e8b62b1d-8bcf-46d3-b948-d45b4ef6a32f '!=' e8b62b1d-8bcf-46d3-b948-d45b4ef6a32f ']' 00:12:08.852 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:08.852 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:08.852 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:08.852 11:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70866 00:12:08.852 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70866 ']' 00:12:08.852 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70866 00:12:08.852 11:21:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:12:08.852 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:08.852 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70866 00:12:08.852 killing process with pid 70866 00:12:08.852 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:08.852 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:08.852 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70866' 00:12:08.852 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70866 00:12:08.852 11:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70866 00:12:08.852 [2024-11-20 11:21:51.746504] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:08.852 [2024-11-20 11:21:51.746644] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.852 [2024-11-20 11:21:51.746745] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:08.852 [2024-11-20 11:21:51.746758] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:09.111 [2024-11-20 11:21:52.144464] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:10.512 11:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:10.512 00:12:10.512 real 0m5.451s 00:12:10.512 user 0m7.735s 00:12:10.512 sys 0m0.827s 00:12:10.512 11:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.512 11:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.512 ************************************ 00:12:10.512 END TEST raid_superblock_test 
00:12:10.512 ************************************ 00:12:10.512 11:21:53 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:12:10.512 11:21:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:10.512 11:21:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.512 11:21:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:10.512 ************************************ 00:12:10.512 START TEST raid_read_error_test 00:12:10.512 ************************************ 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZRHYXzZJGj 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71128 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71128 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:10.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71128 ']' 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:10.512 11:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.512 [2024-11-20 11:21:53.462635] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
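The xtrace lines above (bdev_raid.sh@790-802) show `raid_io_error_test` assembling its parameters before launching bdevperf: a list of four base bdev names and a create-time argument that adds a strip size for non-raid1 levels. The following is a minimal standalone reconstruction of that loop, not the script itself; variable names mirror the trace, and everything else in the harness (mktemp, bdevperf invocation) is left out.

```shell
# Sketch of the setup traced above: build the base bdev name list and
# the create-time argument string for a raid0 array of 4 devices.
raid_level=raid0
num_base_bdevs=4
error_io_type=read

base_bdevs=()
for (( i = 1; i <= num_base_bdevs; i++ )); do
  base_bdevs+=("BaseBdev$i")
done

create_arg=''
# Striped levels (raid0 here) take a strip size via -z; raid1 does not.
if [ "$raid_level" != raid1 ]; then
  strip_size=64
  create_arg+=" -z $strip_size"
fi
```

This matches the trace: `strip_size=64` is set, `create_arg` becomes ` -z 64`, and the four `echo BaseBdevN` lines feed the `base_bdevs` array.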
00:12:10.512 [2024-11-20 11:21:53.462774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71128 ] 00:12:10.772 [2024-11-20 11:21:53.654508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.772 [2024-11-20 11:21:53.772720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.032 [2024-11-20 11:21:53.976885] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:11.032 [2024-11-20 11:21:53.976954] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:11.292 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:11.292 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:11.292 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:11.292 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:11.292 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.292 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.292 BaseBdev1_malloc 00:12:11.292 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.292 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:11.292 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.292 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.292 true 00:12:11.292 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
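The RPC sequence beginning here repeats once per base bdev: a malloc backing device (`bdev_malloc_create 32 512`), an error-injection wrapper around it (`bdev_error_create`, which names its bdev `EE_<base>` as the subsequent passthru calls in the trace confirm), and a passthru bdev exposing the final `BaseBdevN` name that the RAID consumes. A sketch of that three-layer stack, echoing the RPCs instead of sending them — the `rpc.py` invocation form is an assumption for illustration, only the subcommands and arguments come from the trace:

```shell
# Per-device stack from the trace: malloc -> error wrapper -> passthru.
# Commands are echoed, not executed; "rpc.py" is an assumed front end.
build_base_bdev_stack() {
  local bdev=$1
  echo "rpc.py bdev_malloc_create 32 512 -b ${bdev}_malloc"       # 32 MiB, 512 B blocks
  echo "rpc.py bdev_error_create ${bdev}_malloc"                  # creates EE_${bdev}_malloc
  echo "rpc.py bdev_passthru_create -b EE_${bdev}_malloc -p ${bdev}"
}

build_base_bdev_stack BaseBdev1
```

The error layer is what later makes `bdev_error_inject_error EE_BaseBdev1_malloc read failure` possible, while the passthru layer keeps the RAID's view of its base bdevs uniform.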
00:12:11.292 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:11.292 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.292 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.292 [2024-11-20 11:21:54.391139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:11.292 [2024-11-20 11:21:54.391201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.292 [2024-11-20 11:21:54.391223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:11.292 [2024-11-20 11:21:54.391235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.292 [2024-11-20 11:21:54.393349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.292 [2024-11-20 11:21:54.393396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:11.292 BaseBdev1 00:12:11.292 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.292 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:11.292 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:11.292 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.292 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.552 BaseBdev2_malloc 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.552 true 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.552 [2024-11-20 11:21:54.461151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:11.552 [2024-11-20 11:21:54.461276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.552 [2024-11-20 11:21:54.461302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:11.552 [2024-11-20 11:21:54.461314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.552 [2024-11-20 11:21:54.463443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.552 [2024-11-20 11:21:54.463503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:11.552 BaseBdev2 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.552 BaseBdev3_malloc 00:12:11.552 11:21:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.552 true 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.552 [2024-11-20 11:21:54.543785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:11.552 [2024-11-20 11:21:54.543859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.552 [2024-11-20 11:21:54.543882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:11.552 [2024-11-20 11:21:54.543894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.552 [2024-11-20 11:21:54.546120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.552 [2024-11-20 11:21:54.546168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:11.552 BaseBdev3 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.552 BaseBdev4_malloc 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.552 true 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.552 [2024-11-20 11:21:54.611084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:11.552 [2024-11-20 11:21:54.611152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.552 [2024-11-20 11:21:54.611173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:11.552 [2024-11-20 11:21:54.611184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.552 [2024-11-20 11:21:54.613342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.552 [2024-11-20 11:21:54.613387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:11.552 BaseBdev4 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.552 [2024-11-20 11:21:54.623119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:11.552 [2024-11-20 11:21:54.624928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:11.552 [2024-11-20 11:21:54.625080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:11.552 [2024-11-20 11:21:54.625149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:11.552 [2024-11-20 11:21:54.625362] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:11.552 [2024-11-20 11:21:54.625378] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:11.552 [2024-11-20 11:21:54.625627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:11.552 [2024-11-20 11:21:54.625783] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:11.552 [2024-11-20 11:21:54.625794] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:11.552 [2024-11-20 11:21:54.625954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:11.552 11:21:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.552 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.813 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.813 "name": "raid_bdev1", 00:12:11.813 "uuid": "d5ae77e8-2cb5-4eba-8819-a1adc64e17c1", 00:12:11.813 "strip_size_kb": 64, 00:12:11.813 "state": "online", 00:12:11.813 "raid_level": "raid0", 00:12:11.813 "superblock": true, 00:12:11.813 "num_base_bdevs": 4, 00:12:11.813 "num_base_bdevs_discovered": 4, 00:12:11.813 "num_base_bdevs_operational": 4, 00:12:11.813 "base_bdevs_list": [ 00:12:11.813 
{ 00:12:11.813 "name": "BaseBdev1", 00:12:11.813 "uuid": "87432d2c-ea1e-52d1-b3cb-5bdfcee0a119", 00:12:11.813 "is_configured": true, 00:12:11.813 "data_offset": 2048, 00:12:11.813 "data_size": 63488 00:12:11.813 }, 00:12:11.813 { 00:12:11.813 "name": "BaseBdev2", 00:12:11.813 "uuid": "56fadcda-72ce-56f3-9ef8-0e58c971c62f", 00:12:11.813 "is_configured": true, 00:12:11.813 "data_offset": 2048, 00:12:11.813 "data_size": 63488 00:12:11.813 }, 00:12:11.813 { 00:12:11.813 "name": "BaseBdev3", 00:12:11.813 "uuid": "bf9f4022-8654-573f-b776-fd2282866590", 00:12:11.813 "is_configured": true, 00:12:11.813 "data_offset": 2048, 00:12:11.813 "data_size": 63488 00:12:11.813 }, 00:12:11.813 { 00:12:11.813 "name": "BaseBdev4", 00:12:11.813 "uuid": "c7ebbde1-05ed-5976-9686-39317ae86f49", 00:12:11.813 "is_configured": true, 00:12:11.813 "data_offset": 2048, 00:12:11.813 "data_size": 63488 00:12:11.813 } 00:12:11.813 ] 00:12:11.813 }' 00:12:11.813 11:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.813 11:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.072 11:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:12.072 11:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:12.072 [2024-11-20 11:21:55.167499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:13.009 11:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:13.009 11:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.009 11:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.009 11:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.009 11:21:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:13.009 11:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:13.009 11:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:13.009 11:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:13.009 11:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.009 11:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.009 11:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:13.009 11:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.009 11:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.009 11:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.009 11:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.009 11:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.009 11:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.009 11:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.009 11:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.009 11:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.009 11:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.009 11:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.268 11:21:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.268 "name": "raid_bdev1", 00:12:13.268 "uuid": "d5ae77e8-2cb5-4eba-8819-a1adc64e17c1", 00:12:13.268 "strip_size_kb": 64, 00:12:13.268 "state": "online", 00:12:13.268 "raid_level": "raid0", 00:12:13.268 "superblock": true, 00:12:13.268 "num_base_bdevs": 4, 00:12:13.268 "num_base_bdevs_discovered": 4, 00:12:13.268 "num_base_bdevs_operational": 4, 00:12:13.268 "base_bdevs_list": [ 00:12:13.268 { 00:12:13.268 "name": "BaseBdev1", 00:12:13.268 "uuid": "87432d2c-ea1e-52d1-b3cb-5bdfcee0a119", 00:12:13.268 "is_configured": true, 00:12:13.268 "data_offset": 2048, 00:12:13.268 "data_size": 63488 00:12:13.268 }, 00:12:13.268 { 00:12:13.268 "name": "BaseBdev2", 00:12:13.268 "uuid": "56fadcda-72ce-56f3-9ef8-0e58c971c62f", 00:12:13.268 "is_configured": true, 00:12:13.268 "data_offset": 2048, 00:12:13.268 "data_size": 63488 00:12:13.268 }, 00:12:13.268 { 00:12:13.268 "name": "BaseBdev3", 00:12:13.268 "uuid": "bf9f4022-8654-573f-b776-fd2282866590", 00:12:13.268 "is_configured": true, 00:12:13.268 "data_offset": 2048, 00:12:13.268 "data_size": 63488 00:12:13.268 }, 00:12:13.268 { 00:12:13.268 "name": "BaseBdev4", 00:12:13.268 "uuid": "c7ebbde1-05ed-5976-9686-39317ae86f49", 00:12:13.268 "is_configured": true, 00:12:13.268 "data_offset": 2048, 00:12:13.268 "data_size": 63488 00:12:13.268 } 00:12:13.268 ] 00:12:13.268 }' 00:12:13.268 11:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.268 11:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.527 11:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:13.527 11:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.527 11:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.527 [2024-11-20 11:21:56.548399] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:13.527 [2024-11-20 11:21:56.548544] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:13.527 [2024-11-20 11:21:56.551737] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:13.527 [2024-11-20 11:21:56.551848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.527 [2024-11-20 11:21:56.551920] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:13.527 [2024-11-20 11:21:56.552000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:13.527 11:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.527 { 00:12:13.527 "results": [ 00:12:13.527 { 00:12:13.527 "job": "raid_bdev1", 00:12:13.527 "core_mask": "0x1", 00:12:13.527 "workload": "randrw", 00:12:13.527 "percentage": 50, 00:12:13.527 "status": "finished", 00:12:13.527 "queue_depth": 1, 00:12:13.527 "io_size": 131072, 00:12:13.527 "runtime": 1.381636, 00:12:13.527 "iops": 14945.325686360227, 00:12:13.527 "mibps": 1868.1657107950284, 00:12:13.527 "io_failed": 1, 00:12:13.527 "io_timeout": 0, 00:12:13.527 "avg_latency_us": 93.05839144823794, 00:12:13.527 "min_latency_us": 27.053275109170304, 00:12:13.527 "max_latency_us": 1488.1537117903931 00:12:13.527 } 00:12:13.527 ], 00:12:13.527 "core_count": 1 00:12:13.527 } 00:12:13.527 11:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71128 00:12:13.527 11:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71128 ']' 00:12:13.527 11:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71128 00:12:13.527 11:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:13.527 11:21:56 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:13.527 11:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71128 00:12:13.527 11:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:13.527 11:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:13.527 11:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71128' 00:12:13.527 killing process with pid 71128 00:12:13.527 11:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71128 00:12:13.527 [2024-11-20 11:21:56.600328] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:13.527 11:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71128 00:12:14.093 [2024-11-20 11:21:56.939744] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:15.030 11:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZRHYXzZJGj 00:12:15.030 11:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:15.030 11:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:15.292 11:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:12:15.292 11:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:15.292 ************************************ 00:12:15.292 END TEST raid_read_error_test 00:12:15.292 ************************************ 00:12:15.292 11:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:15.292 11:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:15.292 11:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:12:15.292 00:12:15.292 real 0m4.806s 
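The `fail_per_s=0.72` extracted above via `grep`/`awk` from the bdevperf log is consistent with the results JSON earlier in the trace: one failed I/O (`io_failed: 1`) over a `runtime` of 1.381636 seconds. A small reconstruction of that figure, assuming (as the `awk '{print $6}'` pipeline implies) that the extracted column is a failed-I/Os-per-second rate:

```shell
# Recompute fail_per_s from the results JSON fields shown in the log:
# io_failed / runtime, printed to two decimals as bdevperf reports it.
io_failed=1
runtime=1.381636
fail_per_s=$(awk -v f="$io_failed" -v t="$runtime" 'BEGIN { printf "%.2f", f / t }')
echo "$fail_per_s"   # 0.72
```

Since raid0 has no redundancy (`has_redundancy raid0` returns 1), the test asserts the rate is nonzero (`[[ 0.72 != 0.00 ]]`): the injected read error on a base bdev must surface as a failed I/O at the array level.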
00:12:15.292 user 0m5.671s 00:12:15.292 sys 0m0.630s 00:12:15.292 11:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.292 11:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.292 11:21:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:12:15.292 11:21:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:15.292 11:21:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.292 11:21:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:15.292 ************************************ 00:12:15.292 START TEST raid_write_error_test 00:12:15.292 ************************************ 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pIrGNAjRYZ 00:12:15.292 11:21:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71274 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71274 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71274 ']' 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.292 11:21:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.292 [2024-11-20 11:21:58.335788] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:12:15.292 [2024-11-20 11:21:58.336044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71274 ] 00:12:15.551 [2024-11-20 11:21:58.520862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.551 [2024-11-20 11:21:58.648068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.810 [2024-11-20 11:21:58.863451] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.810 [2024-11-20 11:21:58.863633] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.379 BaseBdev1_malloc 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.379 true 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.379 [2024-11-20 11:21:59.261887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:16.379 [2024-11-20 11:21:59.261947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.379 [2024-11-20 11:21:59.261966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:16.379 [2024-11-20 11:21:59.261977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.379 [2024-11-20 11:21:59.264151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.379 [2024-11-20 11:21:59.264197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:16.379 BaseBdev1 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.379 BaseBdev2_malloc 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:16.379 11:21:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.379 true 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.379 [2024-11-20 11:21:59.329220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:16.379 [2024-11-20 11:21:59.329277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.379 [2024-11-20 11:21:59.329293] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:16.379 [2024-11-20 11:21:59.329305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.379 [2024-11-20 11:21:59.331580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.379 [2024-11-20 11:21:59.331623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:16.379 BaseBdev2 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:16.379 BaseBdev3_malloc 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.379 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.380 true 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.380 [2024-11-20 11:21:59.407966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:16.380 [2024-11-20 11:21:59.408020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.380 [2024-11-20 11:21:59.408038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:16.380 [2024-11-20 11:21:59.408049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.380 [2024-11-20 11:21:59.410237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.380 [2024-11-20 11:21:59.410278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:16.380 BaseBdev3 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.380 BaseBdev4_malloc 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.380 true 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.380 [2024-11-20 11:21:59.476981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:16.380 [2024-11-20 11:21:59.477038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.380 [2024-11-20 11:21:59.477057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:16.380 [2024-11-20 11:21:59.477068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.380 [2024-11-20 11:21:59.479308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.380 [2024-11-20 11:21:59.479432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:16.380 BaseBdev4 
00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.380 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.380 [2024-11-20 11:21:59.489019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.380 [2024-11-20 11:21:59.490844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:16.380 [2024-11-20 11:21:59.490924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:16.380 [2024-11-20 11:21:59.490989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:16.380 [2024-11-20 11:21:59.491224] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:16.380 [2024-11-20 11:21:59.491240] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:16.380 [2024-11-20 11:21:59.491487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:16.380 [2024-11-20 11:21:59.491643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:16.380 [2024-11-20 11:21:59.491655] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:16.380 [2024-11-20 11:21:59.491810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.639 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.639 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:12:16.639 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.639 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.639 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:16.639 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.639 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.639 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.640 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.640 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.640 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.640 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.640 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.640 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.640 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.640 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.640 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.640 "name": "raid_bdev1", 00:12:16.640 "uuid": "048e462d-3d30-4158-bac3-3465e8640a68", 00:12:16.640 "strip_size_kb": 64, 00:12:16.640 "state": "online", 00:12:16.640 "raid_level": "raid0", 00:12:16.640 "superblock": true, 00:12:16.640 "num_base_bdevs": 4, 00:12:16.640 "num_base_bdevs_discovered": 4, 00:12:16.640 
"num_base_bdevs_operational": 4, 00:12:16.640 "base_bdevs_list": [ 00:12:16.640 { 00:12:16.640 "name": "BaseBdev1", 00:12:16.640 "uuid": "9da35b06-eb04-57f0-b12a-1ef707eae546", 00:12:16.640 "is_configured": true, 00:12:16.640 "data_offset": 2048, 00:12:16.640 "data_size": 63488 00:12:16.640 }, 00:12:16.640 { 00:12:16.640 "name": "BaseBdev2", 00:12:16.640 "uuid": "79ddc69b-8815-5194-96f3-eb65ba1fe4e8", 00:12:16.640 "is_configured": true, 00:12:16.640 "data_offset": 2048, 00:12:16.640 "data_size": 63488 00:12:16.640 }, 00:12:16.640 { 00:12:16.640 "name": "BaseBdev3", 00:12:16.640 "uuid": "5cff135e-8ad6-546f-82b9-b5c21a4583de", 00:12:16.640 "is_configured": true, 00:12:16.640 "data_offset": 2048, 00:12:16.640 "data_size": 63488 00:12:16.640 }, 00:12:16.640 { 00:12:16.640 "name": "BaseBdev4", 00:12:16.640 "uuid": "aee4c711-beb5-5a8f-b689-2b68c803dfe4", 00:12:16.640 "is_configured": true, 00:12:16.640 "data_offset": 2048, 00:12:16.640 "data_size": 63488 00:12:16.640 } 00:12:16.640 ] 00:12:16.640 }' 00:12:16.640 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.640 11:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.899 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:16.899 11:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:17.159 [2024-11-20 11:22:00.053421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:18.099 11:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:18.099 11:22:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.099 11:22:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.099 11:22:00 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.099 11:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:18.099 11:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:18.099 11:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:18.099 11:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:18.099 11:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.099 11:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.099 11:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:18.099 11:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.099 11:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.099 11:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.099 11:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.099 11:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.099 11:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.099 11:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.099 11:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.099 11:22:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.099 11:22:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.099 11:22:01 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.099 11:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.099 "name": "raid_bdev1", 00:12:18.099 "uuid": "048e462d-3d30-4158-bac3-3465e8640a68", 00:12:18.099 "strip_size_kb": 64, 00:12:18.099 "state": "online", 00:12:18.099 "raid_level": "raid0", 00:12:18.099 "superblock": true, 00:12:18.099 "num_base_bdevs": 4, 00:12:18.099 "num_base_bdevs_discovered": 4, 00:12:18.099 "num_base_bdevs_operational": 4, 00:12:18.099 "base_bdevs_list": [ 00:12:18.099 { 00:12:18.099 "name": "BaseBdev1", 00:12:18.099 "uuid": "9da35b06-eb04-57f0-b12a-1ef707eae546", 00:12:18.099 "is_configured": true, 00:12:18.099 "data_offset": 2048, 00:12:18.099 "data_size": 63488 00:12:18.099 }, 00:12:18.099 { 00:12:18.099 "name": "BaseBdev2", 00:12:18.099 "uuid": "79ddc69b-8815-5194-96f3-eb65ba1fe4e8", 00:12:18.099 "is_configured": true, 00:12:18.099 "data_offset": 2048, 00:12:18.099 "data_size": 63488 00:12:18.099 }, 00:12:18.099 { 00:12:18.099 "name": "BaseBdev3", 00:12:18.099 "uuid": "5cff135e-8ad6-546f-82b9-b5c21a4583de", 00:12:18.099 "is_configured": true, 00:12:18.099 "data_offset": 2048, 00:12:18.099 "data_size": 63488 00:12:18.099 }, 00:12:18.099 { 00:12:18.099 "name": "BaseBdev4", 00:12:18.099 "uuid": "aee4c711-beb5-5a8f-b689-2b68c803dfe4", 00:12:18.099 "is_configured": true, 00:12:18.099 "data_offset": 2048, 00:12:18.099 "data_size": 63488 00:12:18.099 } 00:12:18.099 ] 00:12:18.099 }' 00:12:18.099 11:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.099 11:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.666 11:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:18.666 11:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.666 11:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:18.666 [2024-11-20 11:22:01.490696] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:18.666 [2024-11-20 11:22:01.490806] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.666 [2024-11-20 11:22:01.494074] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.666 [2024-11-20 11:22:01.494202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.666 [2024-11-20 11:22:01.494287] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.666 [2024-11-20 11:22:01.494343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:18.666 { 00:12:18.666 "results": [ 00:12:18.666 { 00:12:18.666 "job": "raid_bdev1", 00:12:18.666 "core_mask": "0x1", 00:12:18.666 "workload": "randrw", 00:12:18.666 "percentage": 50, 00:12:18.666 "status": "finished", 00:12:18.666 "queue_depth": 1, 00:12:18.666 "io_size": 131072, 00:12:18.666 "runtime": 1.438238, 00:12:18.666 "iops": 14013.675066296399, 00:12:18.666 "mibps": 1751.7093832870498, 00:12:18.666 "io_failed": 1, 00:12:18.666 "io_timeout": 0, 00:12:18.666 "avg_latency_us": 99.1082704251814, 00:12:18.666 "min_latency_us": 26.606113537117903, 00:12:18.666 "max_latency_us": 1724.2550218340612 00:12:18.666 } 00:12:18.666 ], 00:12:18.666 "core_count": 1 00:12:18.666 } 00:12:18.667 11:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.667 11:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71274 00:12:18.667 11:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71274 ']' 00:12:18.667 11:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71274 00:12:18.667 11:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:12:18.667 11:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.667 11:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71274 00:12:18.667 killing process with pid 71274 00:12:18.667 11:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.667 11:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.667 11:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71274' 00:12:18.667 11:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71274 00:12:18.667 [2024-11-20 11:22:01.543309] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:18.667 11:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71274 00:12:18.925 [2024-11-20 11:22:01.942024] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:20.305 11:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pIrGNAjRYZ 00:12:20.305 11:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:20.305 11:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:20.305 ************************************ 00:12:20.305 END TEST raid_write_error_test 00:12:20.305 ************************************ 00:12:20.305 11:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:12:20.305 11:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:20.305 11:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:20.305 11:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:20.305 11:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.70 != \0\.\0\0 ]] 00:12:20.305 00:12:20.305 real 0m5.058s 00:12:20.305 user 0m5.980s 00:12:20.305 sys 0m0.621s 00:12:20.305 11:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.305 11:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.305 11:22:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:20.305 11:22:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:12:20.305 11:22:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:20.305 11:22:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.305 11:22:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:20.305 ************************************ 00:12:20.305 START TEST raid_state_function_test 00:12:20.305 ************************************ 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71425 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:20.305 Process raid pid: 71425 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71425' 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71425 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71425 ']' 00:12:20.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.305 11:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.564 [2024-11-20 11:22:03.461243] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:12:20.564 [2024-11-20 11:22:03.461512] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.564 [2024-11-20 11:22:03.645004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.822 [2024-11-20 11:22:03.767205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.080 [2024-11-20 11:22:03.991484] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.080 [2024-11-20 11:22:03.991558] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.339 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.340 [2024-11-20 11:22:04.297613] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:21.340 [2024-11-20 11:22:04.297673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:21.340 [2024-11-20 11:22:04.297685] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:21.340 [2024-11-20 11:22:04.297695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:21.340 [2024-11-20 11:22:04.297701] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:21.340 [2024-11-20 11:22:04.297710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:21.340 [2024-11-20 11:22:04.297716] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:21.340 [2024-11-20 11:22:04.297725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.340 "name": "Existed_Raid", 00:12:21.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.340 "strip_size_kb": 64, 00:12:21.340 "state": "configuring", 00:12:21.340 "raid_level": "concat", 00:12:21.340 "superblock": false, 00:12:21.340 "num_base_bdevs": 4, 00:12:21.340 "num_base_bdevs_discovered": 0, 00:12:21.340 "num_base_bdevs_operational": 4, 00:12:21.340 "base_bdevs_list": [ 00:12:21.340 { 00:12:21.340 "name": "BaseBdev1", 00:12:21.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.340 "is_configured": false, 00:12:21.340 "data_offset": 0, 00:12:21.340 "data_size": 0 00:12:21.340 }, 00:12:21.340 { 00:12:21.340 "name": "BaseBdev2", 00:12:21.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.340 "is_configured": false, 00:12:21.340 "data_offset": 0, 00:12:21.340 "data_size": 0 00:12:21.340 }, 00:12:21.340 { 00:12:21.340 "name": "BaseBdev3", 00:12:21.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.340 "is_configured": false, 00:12:21.340 "data_offset": 0, 00:12:21.340 "data_size": 0 00:12:21.340 }, 00:12:21.340 { 00:12:21.340 "name": "BaseBdev4", 00:12:21.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.340 "is_configured": false, 00:12:21.340 "data_offset": 0, 00:12:21.340 "data_size": 0 00:12:21.340 } 00:12:21.340 ] 00:12:21.340 }' 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.340 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.906 [2024-11-20 11:22:04.744755] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:21.906 [2024-11-20 11:22:04.744866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.906 [2024-11-20 11:22:04.752737] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:21.906 [2024-11-20 11:22:04.752830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:21.906 [2024-11-20 11:22:04.752907] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:21.906 [2024-11-20 11:22:04.752944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:21.906 [2024-11-20 11:22:04.752986] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:21.906 [2024-11-20 11:22:04.753019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:21.906 [2024-11-20 11:22:04.753054] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:21.906 [2024-11-20 11:22:04.753082] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.906 [2024-11-20 11:22:04.799606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.906 BaseBdev1 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.906 [ 00:12:21.906 { 00:12:21.906 "name": "BaseBdev1", 00:12:21.906 "aliases": [ 00:12:21.906 "e4f255c3-fc56-4e58-8719-4bb9979fcc97" 00:12:21.906 ], 00:12:21.906 "product_name": "Malloc disk", 00:12:21.906 "block_size": 512, 00:12:21.906 "num_blocks": 65536, 00:12:21.906 "uuid": "e4f255c3-fc56-4e58-8719-4bb9979fcc97", 00:12:21.906 "assigned_rate_limits": { 00:12:21.906 "rw_ios_per_sec": 0, 00:12:21.906 "rw_mbytes_per_sec": 0, 00:12:21.906 "r_mbytes_per_sec": 0, 00:12:21.906 "w_mbytes_per_sec": 0 00:12:21.906 }, 00:12:21.906 "claimed": true, 00:12:21.906 "claim_type": "exclusive_write", 00:12:21.906 "zoned": false, 00:12:21.906 "supported_io_types": { 00:12:21.906 "read": true, 00:12:21.906 "write": true, 00:12:21.906 "unmap": true, 00:12:21.906 "flush": true, 00:12:21.906 "reset": true, 00:12:21.906 "nvme_admin": false, 00:12:21.906 "nvme_io": false, 00:12:21.906 "nvme_io_md": false, 00:12:21.906 "write_zeroes": true, 00:12:21.906 "zcopy": true, 00:12:21.906 "get_zone_info": false, 00:12:21.906 "zone_management": false, 00:12:21.906 "zone_append": false, 00:12:21.906 "compare": false, 00:12:21.906 "compare_and_write": false, 00:12:21.906 "abort": true, 00:12:21.906 "seek_hole": false, 00:12:21.906 "seek_data": false, 00:12:21.906 "copy": true, 00:12:21.906 "nvme_iov_md": false 00:12:21.906 }, 00:12:21.906 "memory_domains": [ 00:12:21.906 { 00:12:21.906 "dma_device_id": "system", 00:12:21.906 "dma_device_type": 1 00:12:21.906 }, 00:12:21.906 { 00:12:21.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.906 "dma_device_type": 2 00:12:21.906 } 00:12:21.906 ], 00:12:21.906 "driver_specific": {} 00:12:21.906 } 00:12:21.906 ] 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.906 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.906 "name": "Existed_Raid", 
00:12:21.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.906 "strip_size_kb": 64, 00:12:21.906 "state": "configuring", 00:12:21.906 "raid_level": "concat", 00:12:21.906 "superblock": false, 00:12:21.906 "num_base_bdevs": 4, 00:12:21.906 "num_base_bdevs_discovered": 1, 00:12:21.907 "num_base_bdevs_operational": 4, 00:12:21.907 "base_bdevs_list": [ 00:12:21.907 { 00:12:21.907 "name": "BaseBdev1", 00:12:21.907 "uuid": "e4f255c3-fc56-4e58-8719-4bb9979fcc97", 00:12:21.907 "is_configured": true, 00:12:21.907 "data_offset": 0, 00:12:21.907 "data_size": 65536 00:12:21.907 }, 00:12:21.907 { 00:12:21.907 "name": "BaseBdev2", 00:12:21.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.907 "is_configured": false, 00:12:21.907 "data_offset": 0, 00:12:21.907 "data_size": 0 00:12:21.907 }, 00:12:21.907 { 00:12:21.907 "name": "BaseBdev3", 00:12:21.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.907 "is_configured": false, 00:12:21.907 "data_offset": 0, 00:12:21.907 "data_size": 0 00:12:21.907 }, 00:12:21.907 { 00:12:21.907 "name": "BaseBdev4", 00:12:21.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.907 "is_configured": false, 00:12:21.907 "data_offset": 0, 00:12:21.907 "data_size": 0 00:12:21.907 } 00:12:21.907 ] 00:12:21.907 }' 00:12:21.907 11:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.907 11:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.471 [2024-11-20 11:22:05.322872] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:22.471 [2024-11-20 11:22:05.322936] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.471 [2024-11-20 11:22:05.334972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:22.471 [2024-11-20 11:22:05.337055] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:22.471 [2024-11-20 11:22:05.337182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:22.471 [2024-11-20 11:22:05.337200] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:22.471 [2024-11-20 11:22:05.337214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:22.471 [2024-11-20 11:22:05.337223] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:22.471 [2024-11-20 11:22:05.337234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.471 "name": "Existed_Raid", 00:12:22.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.471 "strip_size_kb": 64, 00:12:22.471 "state": "configuring", 00:12:22.471 "raid_level": "concat", 00:12:22.471 "superblock": false, 00:12:22.471 "num_base_bdevs": 4, 00:12:22.471 
"num_base_bdevs_discovered": 1, 00:12:22.471 "num_base_bdevs_operational": 4, 00:12:22.471 "base_bdevs_list": [ 00:12:22.471 { 00:12:22.471 "name": "BaseBdev1", 00:12:22.471 "uuid": "e4f255c3-fc56-4e58-8719-4bb9979fcc97", 00:12:22.471 "is_configured": true, 00:12:22.471 "data_offset": 0, 00:12:22.471 "data_size": 65536 00:12:22.471 }, 00:12:22.471 { 00:12:22.471 "name": "BaseBdev2", 00:12:22.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.471 "is_configured": false, 00:12:22.471 "data_offset": 0, 00:12:22.471 "data_size": 0 00:12:22.471 }, 00:12:22.471 { 00:12:22.471 "name": "BaseBdev3", 00:12:22.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.471 "is_configured": false, 00:12:22.471 "data_offset": 0, 00:12:22.471 "data_size": 0 00:12:22.471 }, 00:12:22.471 { 00:12:22.471 "name": "BaseBdev4", 00:12:22.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.471 "is_configured": false, 00:12:22.471 "data_offset": 0, 00:12:22.471 "data_size": 0 00:12:22.471 } 00:12:22.471 ] 00:12:22.471 }' 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.471 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.037 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:23.037 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.037 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.037 [2024-11-20 11:22:05.885815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:23.037 BaseBdev2 00:12:23.037 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.037 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:23.037 11:22:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:23.037 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.038 [ 00:12:23.038 { 00:12:23.038 "name": "BaseBdev2", 00:12:23.038 "aliases": [ 00:12:23.038 "7673bc29-1c29-4afc-b6df-6d1fee69f6bf" 00:12:23.038 ], 00:12:23.038 "product_name": "Malloc disk", 00:12:23.038 "block_size": 512, 00:12:23.038 "num_blocks": 65536, 00:12:23.038 "uuid": "7673bc29-1c29-4afc-b6df-6d1fee69f6bf", 00:12:23.038 "assigned_rate_limits": { 00:12:23.038 "rw_ios_per_sec": 0, 00:12:23.038 "rw_mbytes_per_sec": 0, 00:12:23.038 "r_mbytes_per_sec": 0, 00:12:23.038 "w_mbytes_per_sec": 0 00:12:23.038 }, 00:12:23.038 "claimed": true, 00:12:23.038 "claim_type": "exclusive_write", 00:12:23.038 "zoned": false, 00:12:23.038 "supported_io_types": { 
00:12:23.038 "read": true, 00:12:23.038 "write": true, 00:12:23.038 "unmap": true, 00:12:23.038 "flush": true, 00:12:23.038 "reset": true, 00:12:23.038 "nvme_admin": false, 00:12:23.038 "nvme_io": false, 00:12:23.038 "nvme_io_md": false, 00:12:23.038 "write_zeroes": true, 00:12:23.038 "zcopy": true, 00:12:23.038 "get_zone_info": false, 00:12:23.038 "zone_management": false, 00:12:23.038 "zone_append": false, 00:12:23.038 "compare": false, 00:12:23.038 "compare_and_write": false, 00:12:23.038 "abort": true, 00:12:23.038 "seek_hole": false, 00:12:23.038 "seek_data": false, 00:12:23.038 "copy": true, 00:12:23.038 "nvme_iov_md": false 00:12:23.038 }, 00:12:23.038 "memory_domains": [ 00:12:23.038 { 00:12:23.038 "dma_device_id": "system", 00:12:23.038 "dma_device_type": 1 00:12:23.038 }, 00:12:23.038 { 00:12:23.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.038 "dma_device_type": 2 00:12:23.038 } 00:12:23.038 ], 00:12:23.038 "driver_specific": {} 00:12:23.038 } 00:12:23.038 ] 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.038 "name": "Existed_Raid", 00:12:23.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.038 "strip_size_kb": 64, 00:12:23.038 "state": "configuring", 00:12:23.038 "raid_level": "concat", 00:12:23.038 "superblock": false, 00:12:23.038 "num_base_bdevs": 4, 00:12:23.038 "num_base_bdevs_discovered": 2, 00:12:23.038 "num_base_bdevs_operational": 4, 00:12:23.038 "base_bdevs_list": [ 00:12:23.038 { 00:12:23.038 "name": "BaseBdev1", 00:12:23.038 "uuid": "e4f255c3-fc56-4e58-8719-4bb9979fcc97", 00:12:23.038 "is_configured": true, 00:12:23.038 "data_offset": 0, 00:12:23.038 "data_size": 65536 00:12:23.038 }, 00:12:23.038 { 00:12:23.038 "name": "BaseBdev2", 00:12:23.038 "uuid": "7673bc29-1c29-4afc-b6df-6d1fee69f6bf", 00:12:23.038 
"is_configured": true, 00:12:23.038 "data_offset": 0, 00:12:23.038 "data_size": 65536 00:12:23.038 }, 00:12:23.038 { 00:12:23.038 "name": "BaseBdev3", 00:12:23.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.038 "is_configured": false, 00:12:23.038 "data_offset": 0, 00:12:23.038 "data_size": 0 00:12:23.038 }, 00:12:23.038 { 00:12:23.038 "name": "BaseBdev4", 00:12:23.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.038 "is_configured": false, 00:12:23.038 "data_offset": 0, 00:12:23.038 "data_size": 0 00:12:23.038 } 00:12:23.038 ] 00:12:23.038 }' 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.038 11:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.297 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:23.297 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.297 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.297 [2024-11-20 11:22:06.385688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:23.298 BaseBdev3 00:12:23.298 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.298 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:23.298 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:23.298 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:23.298 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:23.298 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:23.298 11:22:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:23.298 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:23.298 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.298 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.298 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.298 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:23.298 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.298 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.298 [ 00:12:23.298 { 00:12:23.298 "name": "BaseBdev3", 00:12:23.298 "aliases": [ 00:12:23.298 "570f9a97-b582-4323-80c1-81c34b329c9f" 00:12:23.298 ], 00:12:23.556 "product_name": "Malloc disk", 00:12:23.556 "block_size": 512, 00:12:23.556 "num_blocks": 65536, 00:12:23.556 "uuid": "570f9a97-b582-4323-80c1-81c34b329c9f", 00:12:23.556 "assigned_rate_limits": { 00:12:23.556 "rw_ios_per_sec": 0, 00:12:23.556 "rw_mbytes_per_sec": 0, 00:12:23.556 "r_mbytes_per_sec": 0, 00:12:23.556 "w_mbytes_per_sec": 0 00:12:23.556 }, 00:12:23.556 "claimed": true, 00:12:23.556 "claim_type": "exclusive_write", 00:12:23.556 "zoned": false, 00:12:23.556 "supported_io_types": { 00:12:23.556 "read": true, 00:12:23.556 "write": true, 00:12:23.556 "unmap": true, 00:12:23.556 "flush": true, 00:12:23.556 "reset": true, 00:12:23.556 "nvme_admin": false, 00:12:23.556 "nvme_io": false, 00:12:23.556 "nvme_io_md": false, 00:12:23.556 "write_zeroes": true, 00:12:23.556 "zcopy": true, 00:12:23.556 "get_zone_info": false, 00:12:23.556 "zone_management": false, 00:12:23.556 "zone_append": false, 00:12:23.556 "compare": false, 00:12:23.556 "compare_and_write": false, 
00:12:23.556 "abort": true, 00:12:23.556 "seek_hole": false, 00:12:23.556 "seek_data": false, 00:12:23.556 "copy": true, 00:12:23.556 "nvme_iov_md": false 00:12:23.556 }, 00:12:23.556 "memory_domains": [ 00:12:23.556 { 00:12:23.556 "dma_device_id": "system", 00:12:23.556 "dma_device_type": 1 00:12:23.556 }, 00:12:23.556 { 00:12:23.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.556 "dma_device_type": 2 00:12:23.556 } 00:12:23.556 ], 00:12:23.556 "driver_specific": {} 00:12:23.556 } 00:12:23.556 ] 00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.556 "name": "Existed_Raid", 00:12:23.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.556 "strip_size_kb": 64, 00:12:23.556 "state": "configuring", 00:12:23.556 "raid_level": "concat", 00:12:23.556 "superblock": false, 00:12:23.556 "num_base_bdevs": 4, 00:12:23.556 "num_base_bdevs_discovered": 3, 00:12:23.556 "num_base_bdevs_operational": 4, 00:12:23.556 "base_bdevs_list": [ 00:12:23.556 { 00:12:23.556 "name": "BaseBdev1", 00:12:23.556 "uuid": "e4f255c3-fc56-4e58-8719-4bb9979fcc97", 00:12:23.556 "is_configured": true, 00:12:23.556 "data_offset": 0, 00:12:23.556 "data_size": 65536 00:12:23.556 }, 00:12:23.556 { 00:12:23.556 "name": "BaseBdev2", 00:12:23.556 "uuid": "7673bc29-1c29-4afc-b6df-6d1fee69f6bf", 00:12:23.556 "is_configured": true, 00:12:23.556 "data_offset": 0, 00:12:23.556 "data_size": 65536 00:12:23.556 }, 00:12:23.556 { 00:12:23.556 "name": "BaseBdev3", 00:12:23.556 "uuid": "570f9a97-b582-4323-80c1-81c34b329c9f", 00:12:23.556 "is_configured": true, 00:12:23.556 "data_offset": 0, 00:12:23.556 "data_size": 65536 00:12:23.556 }, 00:12:23.556 { 00:12:23.556 "name": "BaseBdev4", 00:12:23.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.556 "is_configured": false, 
00:12:23.556 "data_offset": 0, 00:12:23.556 "data_size": 0 00:12:23.556 } 00:12:23.556 ] 00:12:23.556 }' 00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.556 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.814 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:23.814 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.814 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.814 [2024-11-20 11:22:06.913954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:23.814 [2024-11-20 11:22:06.914121] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:23.814 [2024-11-20 11:22:06.914149] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:23.814 [2024-11-20 11:22:06.914490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:23.814 [2024-11-20 11:22:06.914707] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:23.814 [2024-11-20 11:22:06.914761] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:23.815 [2024-11-20 11:22:06.915081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.815 BaseBdev4 00:12:23.815 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.815 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:23.815 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:23.815 11:22:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:23.815 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:23.815 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:23.815 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:23.815 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:23.815 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.815 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.073 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.073 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:24.073 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.073 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.073 [ 00:12:24.073 { 00:12:24.073 "name": "BaseBdev4", 00:12:24.073 "aliases": [ 00:12:24.073 "18d8cc59-aced-450f-af72-e5fc294907ea" 00:12:24.073 ], 00:12:24.073 "product_name": "Malloc disk", 00:12:24.073 "block_size": 512, 00:12:24.073 "num_blocks": 65536, 00:12:24.073 "uuid": "18d8cc59-aced-450f-af72-e5fc294907ea", 00:12:24.073 "assigned_rate_limits": { 00:12:24.073 "rw_ios_per_sec": 0, 00:12:24.073 "rw_mbytes_per_sec": 0, 00:12:24.073 "r_mbytes_per_sec": 0, 00:12:24.073 "w_mbytes_per_sec": 0 00:12:24.073 }, 00:12:24.073 "claimed": true, 00:12:24.073 "claim_type": "exclusive_write", 00:12:24.073 "zoned": false, 00:12:24.073 "supported_io_types": { 00:12:24.073 "read": true, 00:12:24.073 "write": true, 00:12:24.073 "unmap": true, 00:12:24.073 "flush": true, 00:12:24.073 "reset": true, 00:12:24.073 
"nvme_admin": false, 00:12:24.073 "nvme_io": false, 00:12:24.073 "nvme_io_md": false, 00:12:24.073 "write_zeroes": true, 00:12:24.073 "zcopy": true, 00:12:24.073 "get_zone_info": false, 00:12:24.073 "zone_management": false, 00:12:24.073 "zone_append": false, 00:12:24.073 "compare": false, 00:12:24.073 "compare_and_write": false, 00:12:24.073 "abort": true, 00:12:24.073 "seek_hole": false, 00:12:24.073 "seek_data": false, 00:12:24.073 "copy": true, 00:12:24.073 "nvme_iov_md": false 00:12:24.073 }, 00:12:24.073 "memory_domains": [ 00:12:24.073 { 00:12:24.073 "dma_device_id": "system", 00:12:24.073 "dma_device_type": 1 00:12:24.074 }, 00:12:24.074 { 00:12:24.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.074 "dma_device_type": 2 00:12:24.074 } 00:12:24.074 ], 00:12:24.074 "driver_specific": {} 00:12:24.074 } 00:12:24.074 ] 00:12:24.074 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.074 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:24.074 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:24.074 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:24.074 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:24.074 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.074 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.074 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:24.074 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.074 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.074 
11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.074 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.074 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.074 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.074 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.074 11:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.074 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.074 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.074 11:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.074 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.074 "name": "Existed_Raid", 00:12:24.074 "uuid": "d7bb70af-6f94-48af-b5f9-6f05fa333c96", 00:12:24.074 "strip_size_kb": 64, 00:12:24.074 "state": "online", 00:12:24.074 "raid_level": "concat", 00:12:24.074 "superblock": false, 00:12:24.074 "num_base_bdevs": 4, 00:12:24.074 "num_base_bdevs_discovered": 4, 00:12:24.074 "num_base_bdevs_operational": 4, 00:12:24.074 "base_bdevs_list": [ 00:12:24.074 { 00:12:24.074 "name": "BaseBdev1", 00:12:24.074 "uuid": "e4f255c3-fc56-4e58-8719-4bb9979fcc97", 00:12:24.074 "is_configured": true, 00:12:24.074 "data_offset": 0, 00:12:24.074 "data_size": 65536 00:12:24.074 }, 00:12:24.074 { 00:12:24.074 "name": "BaseBdev2", 00:12:24.074 "uuid": "7673bc29-1c29-4afc-b6df-6d1fee69f6bf", 00:12:24.074 "is_configured": true, 00:12:24.074 "data_offset": 0, 00:12:24.074 "data_size": 65536 00:12:24.074 }, 00:12:24.074 { 00:12:24.074 "name": "BaseBdev3", 
00:12:24.074 "uuid": "570f9a97-b582-4323-80c1-81c34b329c9f", 00:12:24.074 "is_configured": true, 00:12:24.074 "data_offset": 0, 00:12:24.074 "data_size": 65536 00:12:24.074 }, 00:12:24.074 { 00:12:24.074 "name": "BaseBdev4", 00:12:24.074 "uuid": "18d8cc59-aced-450f-af72-e5fc294907ea", 00:12:24.074 "is_configured": true, 00:12:24.074 "data_offset": 0, 00:12:24.074 "data_size": 65536 00:12:24.074 } 00:12:24.074 ] 00:12:24.074 }' 00:12:24.074 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.074 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.385 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:24.385 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:24.385 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:24.385 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:24.385 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:24.386 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:24.386 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:24.386 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:24.386 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.386 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.386 [2024-11-20 11:22:07.441509] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:24.386 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.386 
11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:24.386 "name": "Existed_Raid", 00:12:24.386 "aliases": [ 00:12:24.386 "d7bb70af-6f94-48af-b5f9-6f05fa333c96" 00:12:24.386 ], 00:12:24.386 "product_name": "Raid Volume", 00:12:24.386 "block_size": 512, 00:12:24.386 "num_blocks": 262144, 00:12:24.386 "uuid": "d7bb70af-6f94-48af-b5f9-6f05fa333c96", 00:12:24.386 "assigned_rate_limits": { 00:12:24.386 "rw_ios_per_sec": 0, 00:12:24.386 "rw_mbytes_per_sec": 0, 00:12:24.386 "r_mbytes_per_sec": 0, 00:12:24.386 "w_mbytes_per_sec": 0 00:12:24.386 }, 00:12:24.386 "claimed": false, 00:12:24.386 "zoned": false, 00:12:24.386 "supported_io_types": { 00:12:24.386 "read": true, 00:12:24.386 "write": true, 00:12:24.386 "unmap": true, 00:12:24.386 "flush": true, 00:12:24.386 "reset": true, 00:12:24.386 "nvme_admin": false, 00:12:24.386 "nvme_io": false, 00:12:24.386 "nvme_io_md": false, 00:12:24.386 "write_zeroes": true, 00:12:24.386 "zcopy": false, 00:12:24.386 "get_zone_info": false, 00:12:24.386 "zone_management": false, 00:12:24.386 "zone_append": false, 00:12:24.386 "compare": false, 00:12:24.386 "compare_and_write": false, 00:12:24.386 "abort": false, 00:12:24.386 "seek_hole": false, 00:12:24.386 "seek_data": false, 00:12:24.386 "copy": false, 00:12:24.386 "nvme_iov_md": false 00:12:24.386 }, 00:12:24.386 "memory_domains": [ 00:12:24.386 { 00:12:24.386 "dma_device_id": "system", 00:12:24.386 "dma_device_type": 1 00:12:24.386 }, 00:12:24.386 { 00:12:24.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.386 "dma_device_type": 2 00:12:24.386 }, 00:12:24.386 { 00:12:24.386 "dma_device_id": "system", 00:12:24.386 "dma_device_type": 1 00:12:24.386 }, 00:12:24.386 { 00:12:24.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.386 "dma_device_type": 2 00:12:24.386 }, 00:12:24.386 { 00:12:24.386 "dma_device_id": "system", 00:12:24.386 "dma_device_type": 1 00:12:24.386 }, 00:12:24.386 { 00:12:24.386 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:24.386 "dma_device_type": 2 00:12:24.386 }, 00:12:24.386 { 00:12:24.386 "dma_device_id": "system", 00:12:24.386 "dma_device_type": 1 00:12:24.386 }, 00:12:24.386 { 00:12:24.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.386 "dma_device_type": 2 00:12:24.386 } 00:12:24.386 ], 00:12:24.386 "driver_specific": { 00:12:24.386 "raid": { 00:12:24.386 "uuid": "d7bb70af-6f94-48af-b5f9-6f05fa333c96", 00:12:24.386 "strip_size_kb": 64, 00:12:24.386 "state": "online", 00:12:24.386 "raid_level": "concat", 00:12:24.386 "superblock": false, 00:12:24.386 "num_base_bdevs": 4, 00:12:24.386 "num_base_bdevs_discovered": 4, 00:12:24.386 "num_base_bdevs_operational": 4, 00:12:24.386 "base_bdevs_list": [ 00:12:24.386 { 00:12:24.386 "name": "BaseBdev1", 00:12:24.386 "uuid": "e4f255c3-fc56-4e58-8719-4bb9979fcc97", 00:12:24.386 "is_configured": true, 00:12:24.386 "data_offset": 0, 00:12:24.386 "data_size": 65536 00:12:24.386 }, 00:12:24.386 { 00:12:24.386 "name": "BaseBdev2", 00:12:24.386 "uuid": "7673bc29-1c29-4afc-b6df-6d1fee69f6bf", 00:12:24.386 "is_configured": true, 00:12:24.386 "data_offset": 0, 00:12:24.386 "data_size": 65536 00:12:24.386 }, 00:12:24.386 { 00:12:24.386 "name": "BaseBdev3", 00:12:24.386 "uuid": "570f9a97-b582-4323-80c1-81c34b329c9f", 00:12:24.386 "is_configured": true, 00:12:24.386 "data_offset": 0, 00:12:24.386 "data_size": 65536 00:12:24.386 }, 00:12:24.386 { 00:12:24.386 "name": "BaseBdev4", 00:12:24.386 "uuid": "18d8cc59-aced-450f-af72-e5fc294907ea", 00:12:24.386 "is_configured": true, 00:12:24.386 "data_offset": 0, 00:12:24.386 "data_size": 65536 00:12:24.386 } 00:12:24.386 ] 00:12:24.386 } 00:12:24.386 } 00:12:24.386 }' 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:24.646 BaseBdev2 
00:12:24.646 BaseBdev3 00:12:24.646 BaseBdev4' 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.646 11:22:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:24.646 11:22:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.646 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.646 [2024-11-20 11:22:07.712804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:24.646 [2024-11-20 11:22:07.712846] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:24.646 [2024-11-20 11:22:07.712907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.907 "name": "Existed_Raid", 00:12:24.907 "uuid": "d7bb70af-6f94-48af-b5f9-6f05fa333c96", 00:12:24.907 "strip_size_kb": 64, 00:12:24.907 "state": "offline", 00:12:24.907 "raid_level": "concat", 00:12:24.907 "superblock": false, 00:12:24.907 "num_base_bdevs": 4, 00:12:24.907 "num_base_bdevs_discovered": 3, 00:12:24.907 "num_base_bdevs_operational": 3, 00:12:24.907 "base_bdevs_list": [ 00:12:24.907 { 00:12:24.907 "name": null, 00:12:24.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.907 "is_configured": false, 00:12:24.907 "data_offset": 0, 00:12:24.907 "data_size": 65536 00:12:24.907 }, 00:12:24.907 { 00:12:24.907 "name": "BaseBdev2", 00:12:24.907 "uuid": "7673bc29-1c29-4afc-b6df-6d1fee69f6bf", 00:12:24.907 "is_configured": 
true, 00:12:24.907 "data_offset": 0, 00:12:24.907 "data_size": 65536 00:12:24.907 }, 00:12:24.907 { 00:12:24.907 "name": "BaseBdev3", 00:12:24.907 "uuid": "570f9a97-b582-4323-80c1-81c34b329c9f", 00:12:24.907 "is_configured": true, 00:12:24.907 "data_offset": 0, 00:12:24.907 "data_size": 65536 00:12:24.907 }, 00:12:24.907 { 00:12:24.907 "name": "BaseBdev4", 00:12:24.907 "uuid": "18d8cc59-aced-450f-af72-e5fc294907ea", 00:12:24.907 "is_configured": true, 00:12:24.907 "data_offset": 0, 00:12:24.907 "data_size": 65536 00:12:24.907 } 00:12:24.907 ] 00:12:24.907 }' 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.907 11:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.167 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:25.167 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:25.167 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.167 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:25.167 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.167 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.426 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.426 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:25.426 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:25.426 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:25.426 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
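The `verify_raid_bdev_state` calls traced above boil down to: fetch the raid bdev's info blob, pull out fields like `state` and `num_base_bdevs_discovered`, and compare them with expected values. A minimal stand-alone sketch of that idea follows; the sample blob mirrors the JSON dumped in this log, but the parsing is a hypothetical pure-bash approximation of the `jq` pipeline the real `bdev_raid.sh` uses, good only for this one-field-per-line layout.

```shell
#!/usr/bin/env bash
# Sample info blob, shaped like the output of
# "rpc_cmd bdev_raid_get_bdevs all | jq '.[] | select(.name == "Existed_Raid")'".
raid_bdev_info='{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "offline",
  "raid_level": "concat",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3
}'

# Extract the value for a given key from the blob (strings or numbers).
get_field() {
    local key=$1 line
    while IFS= read -r line; do
        if [[ $line == *"\"$key\":"* ]]; then
            line=${line#*: }      # drop the key and separator
            line=${line%,}        # drop a trailing comma
            line=${line//\"/}     # strip quotes around string values
            echo "$line"
            return 0
        fi
    done <<< "$raid_bdev_info"
    return 1
}

expected_state=offline
actual_state=$(get_field state)
if [[ $actual_state == "$expected_state" ]]; then
    echo "state OK: $actual_state"
else
    echo "state mismatch: got $actual_state, want $expected_state" >&2
fi
```

With this blob the check passes: after BaseBdev1 is deleted the raid goes `offline` with three of four base bdevs discovered, which is exactly what the test asserts above for a concat (non-redundant) level.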
00:12:25.426 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.426 [2024-11-20 11:22:08.312051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:25.426 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.426 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:25.426 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:25.426 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:25.426 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.426 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.426 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.426 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.426 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:25.426 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:25.426 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:25.426 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.426 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.426 [2024-11-20 11:22:08.461184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:25.685 11:22:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.685 [2024-11-20 11:22:08.623594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:25.685 [2024-11-20 11:22:08.623654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.685 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.946 BaseBdev2 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.946 [ 00:12:25.946 { 00:12:25.946 "name": "BaseBdev2", 00:12:25.946 "aliases": [ 00:12:25.946 "b0c8386e-e8e3-4817-8b36-2a44ed263cd8" 00:12:25.946 ], 00:12:25.946 "product_name": "Malloc disk", 00:12:25.946 "block_size": 512, 00:12:25.946 "num_blocks": 65536, 00:12:25.946 "uuid": "b0c8386e-e8e3-4817-8b36-2a44ed263cd8", 00:12:25.946 "assigned_rate_limits": { 00:12:25.946 "rw_ios_per_sec": 0, 00:12:25.946 "rw_mbytes_per_sec": 0, 00:12:25.946 "r_mbytes_per_sec": 0, 00:12:25.946 "w_mbytes_per_sec": 0 00:12:25.946 }, 00:12:25.946 "claimed": false, 00:12:25.946 "zoned": false, 00:12:25.946 "supported_io_types": { 00:12:25.946 "read": true, 00:12:25.946 "write": true, 00:12:25.946 "unmap": true, 00:12:25.946 "flush": true, 00:12:25.946 "reset": true, 00:12:25.946 "nvme_admin": false, 00:12:25.946 "nvme_io": false, 00:12:25.946 "nvme_io_md": false, 00:12:25.946 "write_zeroes": true, 00:12:25.946 "zcopy": true, 00:12:25.946 "get_zone_info": false, 00:12:25.946 "zone_management": false, 00:12:25.946 "zone_append": false, 00:12:25.946 "compare": false, 00:12:25.946 "compare_and_write": false, 00:12:25.946 "abort": true, 00:12:25.946 "seek_hole": false, 00:12:25.946 
"seek_data": false, 00:12:25.946 "copy": true, 00:12:25.946 "nvme_iov_md": false 00:12:25.946 }, 00:12:25.946 "memory_domains": [ 00:12:25.946 { 00:12:25.946 "dma_device_id": "system", 00:12:25.946 "dma_device_type": 1 00:12:25.946 }, 00:12:25.946 { 00:12:25.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.946 "dma_device_type": 2 00:12:25.946 } 00:12:25.946 ], 00:12:25.946 "driver_specific": {} 00:12:25.946 } 00:12:25.946 ] 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.946 BaseBdev3 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.946 [ 00:12:25.946 { 00:12:25.946 "name": "BaseBdev3", 00:12:25.946 "aliases": [ 00:12:25.946 "79abe75d-60b8-4f7e-9242-46bf556b8d59" 00:12:25.946 ], 00:12:25.946 "product_name": "Malloc disk", 00:12:25.946 "block_size": 512, 00:12:25.946 "num_blocks": 65536, 00:12:25.946 "uuid": "79abe75d-60b8-4f7e-9242-46bf556b8d59", 00:12:25.946 "assigned_rate_limits": { 00:12:25.946 "rw_ios_per_sec": 0, 00:12:25.946 "rw_mbytes_per_sec": 0, 00:12:25.946 "r_mbytes_per_sec": 0, 00:12:25.946 "w_mbytes_per_sec": 0 00:12:25.946 }, 00:12:25.946 "claimed": false, 00:12:25.946 "zoned": false, 00:12:25.946 "supported_io_types": { 00:12:25.946 "read": true, 00:12:25.946 "write": true, 00:12:25.946 "unmap": true, 00:12:25.946 "flush": true, 00:12:25.946 "reset": true, 00:12:25.946 "nvme_admin": false, 00:12:25.946 "nvme_io": false, 00:12:25.946 "nvme_io_md": false, 00:12:25.946 "write_zeroes": true, 00:12:25.946 "zcopy": true, 00:12:25.946 "get_zone_info": false, 00:12:25.946 "zone_management": false, 00:12:25.946 "zone_append": false, 00:12:25.946 "compare": false, 00:12:25.946 "compare_and_write": false, 00:12:25.946 "abort": true, 00:12:25.946 "seek_hole": false, 00:12:25.946 "seek_data": false, 
00:12:25.946 "copy": true, 00:12:25.946 "nvme_iov_md": false 00:12:25.946 }, 00:12:25.946 "memory_domains": [ 00:12:25.946 { 00:12:25.946 "dma_device_id": "system", 00:12:25.946 "dma_device_type": 1 00:12:25.946 }, 00:12:25.946 { 00:12:25.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.946 "dma_device_type": 2 00:12:25.946 } 00:12:25.946 ], 00:12:25.946 "driver_specific": {} 00:12:25.946 } 00:12:25.946 ] 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.946 BaseBdev4 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:25.946 
11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.946 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:25.947 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.947 11:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.947 [ 00:12:25.947 { 00:12:25.947 "name": "BaseBdev4", 00:12:25.947 "aliases": [ 00:12:25.947 "6d2edb46-ac86-4f04-934d-7a6c0f307650" 00:12:25.947 ], 00:12:25.947 "product_name": "Malloc disk", 00:12:25.947 "block_size": 512, 00:12:25.947 "num_blocks": 65536, 00:12:25.947 "uuid": "6d2edb46-ac86-4f04-934d-7a6c0f307650", 00:12:25.947 "assigned_rate_limits": { 00:12:25.947 "rw_ios_per_sec": 0, 00:12:25.947 "rw_mbytes_per_sec": 0, 00:12:25.947 "r_mbytes_per_sec": 0, 00:12:25.947 "w_mbytes_per_sec": 0 00:12:25.947 }, 00:12:25.947 "claimed": false, 00:12:25.947 "zoned": false, 00:12:25.947 "supported_io_types": { 00:12:25.947 "read": true, 00:12:25.947 "write": true, 00:12:25.947 "unmap": true, 00:12:25.947 "flush": true, 00:12:25.947 "reset": true, 00:12:25.947 "nvme_admin": false, 00:12:25.947 "nvme_io": false, 00:12:25.947 "nvme_io_md": false, 00:12:25.947 "write_zeroes": true, 00:12:25.947 "zcopy": true, 00:12:25.947 "get_zone_info": false, 00:12:25.947 "zone_management": false, 00:12:25.947 "zone_append": false, 00:12:25.947 "compare": false, 00:12:25.947 "compare_and_write": false, 00:12:25.947 "abort": true, 00:12:25.947 "seek_hole": false, 00:12:25.947 "seek_data": false, 00:12:25.947 
"copy": true, 00:12:25.947 "nvme_iov_md": false 00:12:25.947 }, 00:12:25.947 "memory_domains": [ 00:12:25.947 { 00:12:25.947 "dma_device_id": "system", 00:12:25.947 "dma_device_type": 1 00:12:25.947 }, 00:12:25.947 { 00:12:25.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.947 "dma_device_type": 2 00:12:25.947 } 00:12:25.947 ], 00:12:25.947 "driver_specific": {} 00:12:25.947 } 00:12:25.947 ] 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.947 [2024-11-20 11:22:09.022856] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:25.947 [2024-11-20 11:22:09.022968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:25.947 [2024-11-20 11:22:09.023046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:25.947 [2024-11-20 11:22:09.025085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:25.947 [2024-11-20 11:22:09.025203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.947 11:22:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.947 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.211 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.211 "name": "Existed_Raid", 00:12:26.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.211 "strip_size_kb": 64, 00:12:26.211 "state": "configuring", 00:12:26.211 
"raid_level": "concat", 00:12:26.211 "superblock": false, 00:12:26.211 "num_base_bdevs": 4, 00:12:26.211 "num_base_bdevs_discovered": 3, 00:12:26.211 "num_base_bdevs_operational": 4, 00:12:26.211 "base_bdevs_list": [ 00:12:26.211 { 00:12:26.211 "name": "BaseBdev1", 00:12:26.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.211 "is_configured": false, 00:12:26.211 "data_offset": 0, 00:12:26.211 "data_size": 0 00:12:26.211 }, 00:12:26.211 { 00:12:26.212 "name": "BaseBdev2", 00:12:26.212 "uuid": "b0c8386e-e8e3-4817-8b36-2a44ed263cd8", 00:12:26.212 "is_configured": true, 00:12:26.212 "data_offset": 0, 00:12:26.212 "data_size": 65536 00:12:26.212 }, 00:12:26.212 { 00:12:26.212 "name": "BaseBdev3", 00:12:26.212 "uuid": "79abe75d-60b8-4f7e-9242-46bf556b8d59", 00:12:26.212 "is_configured": true, 00:12:26.212 "data_offset": 0, 00:12:26.212 "data_size": 65536 00:12:26.212 }, 00:12:26.212 { 00:12:26.212 "name": "BaseBdev4", 00:12:26.212 "uuid": "6d2edb46-ac86-4f04-934d-7a6c0f307650", 00:12:26.212 "is_configured": true, 00:12:26.212 "data_offset": 0, 00:12:26.212 "data_size": 65536 00:12:26.212 } 00:12:26.212 ] 00:12:26.212 }' 00:12:26.212 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.212 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.473 [2024-11-20 11:22:09.474080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.473 "name": "Existed_Raid", 00:12:26.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.473 "strip_size_kb": 64, 00:12:26.473 "state": "configuring", 00:12:26.473 "raid_level": "concat", 00:12:26.473 "superblock": false, 
00:12:26.473 "num_base_bdevs": 4, 00:12:26.473 "num_base_bdevs_discovered": 2, 00:12:26.473 "num_base_bdevs_operational": 4, 00:12:26.473 "base_bdevs_list": [ 00:12:26.473 { 00:12:26.473 "name": "BaseBdev1", 00:12:26.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.473 "is_configured": false, 00:12:26.473 "data_offset": 0, 00:12:26.473 "data_size": 0 00:12:26.473 }, 00:12:26.473 { 00:12:26.473 "name": null, 00:12:26.473 "uuid": "b0c8386e-e8e3-4817-8b36-2a44ed263cd8", 00:12:26.473 "is_configured": false, 00:12:26.473 "data_offset": 0, 00:12:26.473 "data_size": 65536 00:12:26.473 }, 00:12:26.473 { 00:12:26.473 "name": "BaseBdev3", 00:12:26.473 "uuid": "79abe75d-60b8-4f7e-9242-46bf556b8d59", 00:12:26.473 "is_configured": true, 00:12:26.473 "data_offset": 0, 00:12:26.473 "data_size": 65536 00:12:26.473 }, 00:12:26.473 { 00:12:26.473 "name": "BaseBdev4", 00:12:26.473 "uuid": "6d2edb46-ac86-4f04-934d-7a6c0f307650", 00:12:26.473 "is_configured": true, 00:12:26.473 "data_offset": 0, 00:12:26.473 "data_size": 65536 00:12:26.473 } 00:12:26.473 ] 00:12:26.473 }' 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.473 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.044 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.044 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.044 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.044 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:27.044 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.044 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:27.044 11:22:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:27.044 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.044 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.044 [2024-11-20 11:22:09.996243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:27.044 BaseBdev1 00:12:27.044 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.044 11:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:27.044 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:27.044 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:27.044 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:27.044 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:27.044 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:27.044 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:27.044 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.044 11:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:27.044 [ 00:12:27.044 { 00:12:27.044 "name": "BaseBdev1", 00:12:27.044 "aliases": [ 00:12:27.044 "cae93408-0660-423c-8a59-4f6f55bd6263" 00:12:27.044 ], 00:12:27.044 "product_name": "Malloc disk", 00:12:27.044 "block_size": 512, 00:12:27.044 "num_blocks": 65536, 00:12:27.044 "uuid": "cae93408-0660-423c-8a59-4f6f55bd6263", 00:12:27.044 "assigned_rate_limits": { 00:12:27.044 "rw_ios_per_sec": 0, 00:12:27.044 "rw_mbytes_per_sec": 0, 00:12:27.044 "r_mbytes_per_sec": 0, 00:12:27.044 "w_mbytes_per_sec": 0 00:12:27.044 }, 00:12:27.044 "claimed": true, 00:12:27.044 "claim_type": "exclusive_write", 00:12:27.044 "zoned": false, 00:12:27.044 "supported_io_types": { 00:12:27.044 "read": true, 00:12:27.044 "write": true, 00:12:27.044 "unmap": true, 00:12:27.044 "flush": true, 00:12:27.044 "reset": true, 00:12:27.044 "nvme_admin": false, 00:12:27.044 "nvme_io": false, 00:12:27.044 "nvme_io_md": false, 00:12:27.044 "write_zeroes": true, 00:12:27.044 "zcopy": true, 00:12:27.044 "get_zone_info": false, 00:12:27.044 "zone_management": false, 00:12:27.044 "zone_append": false, 00:12:27.044 "compare": false, 00:12:27.044 "compare_and_write": false, 00:12:27.044 "abort": true, 00:12:27.044 "seek_hole": false, 00:12:27.044 "seek_data": false, 00:12:27.044 "copy": true, 00:12:27.044 "nvme_iov_md": false 00:12:27.044 }, 00:12:27.044 "memory_domains": [ 00:12:27.044 { 00:12:27.044 "dma_device_id": "system", 00:12:27.044 "dma_device_type": 1 00:12:27.044 }, 00:12:27.044 { 00:12:27.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.044 "dma_device_type": 2 00:12:27.044 } 00:12:27.044 ], 00:12:27.044 "driver_specific": {} 00:12:27.044 } 00:12:27.044 ] 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.044 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.044 "name": "Existed_Raid", 00:12:27.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.044 "strip_size_kb": 64, 00:12:27.044 "state": "configuring", 00:12:27.044 "raid_level": "concat", 00:12:27.044 "superblock": false, 
00:12:27.044 "num_base_bdevs": 4, 00:12:27.044 "num_base_bdevs_discovered": 3, 00:12:27.044 "num_base_bdevs_operational": 4, 00:12:27.044 "base_bdevs_list": [ 00:12:27.044 { 00:12:27.044 "name": "BaseBdev1", 00:12:27.044 "uuid": "cae93408-0660-423c-8a59-4f6f55bd6263", 00:12:27.044 "is_configured": true, 00:12:27.044 "data_offset": 0, 00:12:27.044 "data_size": 65536 00:12:27.044 }, 00:12:27.044 { 00:12:27.044 "name": null, 00:12:27.044 "uuid": "b0c8386e-e8e3-4817-8b36-2a44ed263cd8", 00:12:27.044 "is_configured": false, 00:12:27.044 "data_offset": 0, 00:12:27.044 "data_size": 65536 00:12:27.044 }, 00:12:27.044 { 00:12:27.044 "name": "BaseBdev3", 00:12:27.044 "uuid": "79abe75d-60b8-4f7e-9242-46bf556b8d59", 00:12:27.044 "is_configured": true, 00:12:27.045 "data_offset": 0, 00:12:27.045 "data_size": 65536 00:12:27.045 }, 00:12:27.045 { 00:12:27.045 "name": "BaseBdev4", 00:12:27.045 "uuid": "6d2edb46-ac86-4f04-934d-7a6c0f307650", 00:12:27.045 "is_configured": true, 00:12:27.045 "data_offset": 0, 00:12:27.045 "data_size": 65536 00:12:27.045 } 00:12:27.045 ] 00:12:27.045 }' 00:12:27.045 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.045 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:27.641 11:22:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.641 [2024-11-20 11:22:10.527476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.641 "name": "Existed_Raid", 00:12:27.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.641 "strip_size_kb": 64, 00:12:27.641 "state": "configuring", 00:12:27.641 "raid_level": "concat", 00:12:27.641 "superblock": false, 00:12:27.641 "num_base_bdevs": 4, 00:12:27.641 "num_base_bdevs_discovered": 2, 00:12:27.641 "num_base_bdevs_operational": 4, 00:12:27.641 "base_bdevs_list": [ 00:12:27.641 { 00:12:27.641 "name": "BaseBdev1", 00:12:27.641 "uuid": "cae93408-0660-423c-8a59-4f6f55bd6263", 00:12:27.641 "is_configured": true, 00:12:27.641 "data_offset": 0, 00:12:27.641 "data_size": 65536 00:12:27.641 }, 00:12:27.641 { 00:12:27.641 "name": null, 00:12:27.641 "uuid": "b0c8386e-e8e3-4817-8b36-2a44ed263cd8", 00:12:27.641 "is_configured": false, 00:12:27.641 "data_offset": 0, 00:12:27.641 "data_size": 65536 00:12:27.641 }, 00:12:27.641 { 00:12:27.641 "name": null, 00:12:27.641 "uuid": "79abe75d-60b8-4f7e-9242-46bf556b8d59", 00:12:27.641 "is_configured": false, 00:12:27.641 "data_offset": 0, 00:12:27.641 "data_size": 65536 00:12:27.641 }, 00:12:27.641 { 00:12:27.641 "name": "BaseBdev4", 00:12:27.641 "uuid": "6d2edb46-ac86-4f04-934d-7a6c0f307650", 00:12:27.641 "is_configured": true, 00:12:27.641 "data_offset": 0, 00:12:27.641 "data_size": 65536 00:12:27.641 } 00:12:27.641 ] 00:12:27.641 }' 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.641 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.900 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:27.901 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.901 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.901 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:27.901 11:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.901 11:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:27.901 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:27.901 11:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.901 11:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.901 [2024-11-20 11:22:11.010664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:28.164 11:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.164 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:28.164 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.164 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.165 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:28.165 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.165 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.165 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:12:28.165 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.165 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.165 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.165 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.165 11:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.165 11:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.165 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.165 11:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.165 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.165 "name": "Existed_Raid", 00:12:28.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.166 "strip_size_kb": 64, 00:12:28.166 "state": "configuring", 00:12:28.166 "raid_level": "concat", 00:12:28.166 "superblock": false, 00:12:28.166 "num_base_bdevs": 4, 00:12:28.166 "num_base_bdevs_discovered": 3, 00:12:28.166 "num_base_bdevs_operational": 4, 00:12:28.166 "base_bdevs_list": [ 00:12:28.166 { 00:12:28.166 "name": "BaseBdev1", 00:12:28.166 "uuid": "cae93408-0660-423c-8a59-4f6f55bd6263", 00:12:28.166 "is_configured": true, 00:12:28.166 "data_offset": 0, 00:12:28.166 "data_size": 65536 00:12:28.166 }, 00:12:28.166 { 00:12:28.166 "name": null, 00:12:28.166 "uuid": "b0c8386e-e8e3-4817-8b36-2a44ed263cd8", 00:12:28.166 "is_configured": false, 00:12:28.166 "data_offset": 0, 00:12:28.166 "data_size": 65536 00:12:28.166 }, 00:12:28.166 { 00:12:28.166 "name": "BaseBdev3", 00:12:28.166 "uuid": "79abe75d-60b8-4f7e-9242-46bf556b8d59", 00:12:28.166 
"is_configured": true, 00:12:28.166 "data_offset": 0, 00:12:28.166 "data_size": 65536 00:12:28.166 }, 00:12:28.166 { 00:12:28.166 "name": "BaseBdev4", 00:12:28.166 "uuid": "6d2edb46-ac86-4f04-934d-7a6c0f307650", 00:12:28.166 "is_configured": true, 00:12:28.166 "data_offset": 0, 00:12:28.166 "data_size": 65536 00:12:28.166 } 00:12:28.166 ] 00:12:28.166 }' 00:12:28.166 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.166 11:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.429 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:28.429 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.429 11:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.429 11:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.429 11:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.429 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:28.429 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:28.429 11:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.429 11:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.429 [2024-11-20 11:22:11.481879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:28.688 11:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.688 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:28.688 11:22:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.688 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.688 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:28.688 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.688 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.688 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.688 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.688 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.688 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.688 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.688 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.688 11:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.688 11:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.688 11:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.688 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.688 "name": "Existed_Raid", 00:12:28.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.688 "strip_size_kb": 64, 00:12:28.688 "state": "configuring", 00:12:28.688 "raid_level": "concat", 00:12:28.688 "superblock": false, 00:12:28.688 "num_base_bdevs": 4, 00:12:28.688 "num_base_bdevs_discovered": 2, 00:12:28.688 "num_base_bdevs_operational": 4, 
00:12:28.688 "base_bdevs_list": [ 00:12:28.688 { 00:12:28.688 "name": null, 00:12:28.688 "uuid": "cae93408-0660-423c-8a59-4f6f55bd6263", 00:12:28.688 "is_configured": false, 00:12:28.688 "data_offset": 0, 00:12:28.688 "data_size": 65536 00:12:28.688 }, 00:12:28.688 { 00:12:28.688 "name": null, 00:12:28.688 "uuid": "b0c8386e-e8e3-4817-8b36-2a44ed263cd8", 00:12:28.688 "is_configured": false, 00:12:28.688 "data_offset": 0, 00:12:28.688 "data_size": 65536 00:12:28.688 }, 00:12:28.688 { 00:12:28.688 "name": "BaseBdev3", 00:12:28.688 "uuid": "79abe75d-60b8-4f7e-9242-46bf556b8d59", 00:12:28.688 "is_configured": true, 00:12:28.688 "data_offset": 0, 00:12:28.688 "data_size": 65536 00:12:28.688 }, 00:12:28.688 { 00:12:28.688 "name": "BaseBdev4", 00:12:28.688 "uuid": "6d2edb46-ac86-4f04-934d-7a6c0f307650", 00:12:28.688 "is_configured": true, 00:12:28.688 "data_offset": 0, 00:12:28.688 "data_size": 65536 00:12:28.688 } 00:12:28.688 ] 00:12:28.688 }' 00:12:28.689 11:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.689 11:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.948 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.948 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:28.948 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.948 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.948 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.208 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:29.208 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:29.208 11:22:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.208 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.208 [2024-11-20 11:22:12.071684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:29.208 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.208 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:29.208 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.208 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.208 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:29.208 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.208 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.208 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.208 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.208 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.208 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.208 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.208 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.208 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.208 11:22:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.208 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.208 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.208 "name": "Existed_Raid", 00:12:29.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.208 "strip_size_kb": 64, 00:12:29.208 "state": "configuring", 00:12:29.208 "raid_level": "concat", 00:12:29.208 "superblock": false, 00:12:29.208 "num_base_bdevs": 4, 00:12:29.208 "num_base_bdevs_discovered": 3, 00:12:29.208 "num_base_bdevs_operational": 4, 00:12:29.208 "base_bdevs_list": [ 00:12:29.208 { 00:12:29.208 "name": null, 00:12:29.208 "uuid": "cae93408-0660-423c-8a59-4f6f55bd6263", 00:12:29.208 "is_configured": false, 00:12:29.208 "data_offset": 0, 00:12:29.208 "data_size": 65536 00:12:29.208 }, 00:12:29.208 { 00:12:29.208 "name": "BaseBdev2", 00:12:29.208 "uuid": "b0c8386e-e8e3-4817-8b36-2a44ed263cd8", 00:12:29.208 "is_configured": true, 00:12:29.208 "data_offset": 0, 00:12:29.208 "data_size": 65536 00:12:29.208 }, 00:12:29.208 { 00:12:29.208 "name": "BaseBdev3", 00:12:29.208 "uuid": "79abe75d-60b8-4f7e-9242-46bf556b8d59", 00:12:29.208 "is_configured": true, 00:12:29.208 "data_offset": 0, 00:12:29.208 "data_size": 65536 00:12:29.208 }, 00:12:29.208 { 00:12:29.208 "name": "BaseBdev4", 00:12:29.208 "uuid": "6d2edb46-ac86-4f04-934d-7a6c0f307650", 00:12:29.208 "is_configured": true, 00:12:29.208 "data_offset": 0, 00:12:29.208 "data_size": 65536 00:12:29.208 } 00:12:29.208 ] 00:12:29.208 }' 00:12:29.208 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.208 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.467 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.467 11:22:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:29.467 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.467 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.467 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.804 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:29.804 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.804 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.804 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.804 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:29.804 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.804 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cae93408-0660-423c-8a59-4f6f55bd6263 00:12:29.804 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.804 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.804 [2024-11-20 11:22:12.671497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:29.804 [2024-11-20 11:22:12.671656] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:29.804 [2024-11-20 11:22:12.671667] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:29.804 [2024-11-20 11:22:12.671955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:29.804 [2024-11-20 11:22:12.672110] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:29.804 [2024-11-20 11:22:12.672123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:29.804 [2024-11-20 11:22:12.672394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.804 NewBaseBdev 00:12:29.804 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.804 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:29.804 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:29.804 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:29.804 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:29.804 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:29.804 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:29.804 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:29.804 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.804 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.804 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.804 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:29.805 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.805 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.805 [ 00:12:29.805 { 
00:12:29.805 "name": "NewBaseBdev", 00:12:29.805 "aliases": [ 00:12:29.805 "cae93408-0660-423c-8a59-4f6f55bd6263" 00:12:29.805 ], 00:12:29.805 "product_name": "Malloc disk", 00:12:29.805 "block_size": 512, 00:12:29.805 "num_blocks": 65536, 00:12:29.805 "uuid": "cae93408-0660-423c-8a59-4f6f55bd6263", 00:12:29.805 "assigned_rate_limits": { 00:12:29.805 "rw_ios_per_sec": 0, 00:12:29.805 "rw_mbytes_per_sec": 0, 00:12:29.805 "r_mbytes_per_sec": 0, 00:12:29.805 "w_mbytes_per_sec": 0 00:12:29.805 }, 00:12:29.805 "claimed": true, 00:12:29.805 "claim_type": "exclusive_write", 00:12:29.805 "zoned": false, 00:12:29.805 "supported_io_types": { 00:12:29.805 "read": true, 00:12:29.805 "write": true, 00:12:29.805 "unmap": true, 00:12:29.805 "flush": true, 00:12:29.805 "reset": true, 00:12:29.805 "nvme_admin": false, 00:12:29.805 "nvme_io": false, 00:12:29.805 "nvme_io_md": false, 00:12:29.805 "write_zeroes": true, 00:12:29.805 "zcopy": true, 00:12:29.805 "get_zone_info": false, 00:12:29.805 "zone_management": false, 00:12:29.805 "zone_append": false, 00:12:29.805 "compare": false, 00:12:29.805 "compare_and_write": false, 00:12:29.805 "abort": true, 00:12:29.805 "seek_hole": false, 00:12:29.805 "seek_data": false, 00:12:29.805 "copy": true, 00:12:29.805 "nvme_iov_md": false 00:12:29.805 }, 00:12:29.805 "memory_domains": [ 00:12:29.805 { 00:12:29.805 "dma_device_id": "system", 00:12:29.805 "dma_device_type": 1 00:12:29.805 }, 00:12:29.805 { 00:12:29.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.805 "dma_device_type": 2 00:12:29.805 } 00:12:29.805 ], 00:12:29.805 "driver_specific": {} 00:12:29.805 } 00:12:29.805 ] 00:12:29.805 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.805 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:29.805 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:29.805 
11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.805 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.805 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:29.805 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.805 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.805 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.805 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.805 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.805 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.805 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.805 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.805 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.805 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.805 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.805 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.805 "name": "Existed_Raid", 00:12:29.805 "uuid": "78f295be-d20c-4cd4-8568-31bfef33d59d", 00:12:29.805 "strip_size_kb": 64, 00:12:29.805 "state": "online", 00:12:29.805 "raid_level": "concat", 00:12:29.805 "superblock": false, 00:12:29.805 "num_base_bdevs": 4, 00:12:29.805 "num_base_bdevs_discovered": 4, 00:12:29.805 
"num_base_bdevs_operational": 4, 00:12:29.805 "base_bdevs_list": [ 00:12:29.805 { 00:12:29.805 "name": "NewBaseBdev", 00:12:29.805 "uuid": "cae93408-0660-423c-8a59-4f6f55bd6263", 00:12:29.805 "is_configured": true, 00:12:29.805 "data_offset": 0, 00:12:29.805 "data_size": 65536 00:12:29.805 }, 00:12:29.805 { 00:12:29.805 "name": "BaseBdev2", 00:12:29.805 "uuid": "b0c8386e-e8e3-4817-8b36-2a44ed263cd8", 00:12:29.805 "is_configured": true, 00:12:29.805 "data_offset": 0, 00:12:29.805 "data_size": 65536 00:12:29.805 }, 00:12:29.805 { 00:12:29.805 "name": "BaseBdev3", 00:12:29.805 "uuid": "79abe75d-60b8-4f7e-9242-46bf556b8d59", 00:12:29.805 "is_configured": true, 00:12:29.805 "data_offset": 0, 00:12:29.805 "data_size": 65536 00:12:29.805 }, 00:12:29.805 { 00:12:29.805 "name": "BaseBdev4", 00:12:29.805 "uuid": "6d2edb46-ac86-4f04-934d-7a6c0f307650", 00:12:29.805 "is_configured": true, 00:12:29.805 "data_offset": 0, 00:12:29.805 "data_size": 65536 00:12:29.805 } 00:12:29.805 ] 00:12:29.805 }' 00:12:29.805 11:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.805 11:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.065 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:30.065 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:30.065 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:30.065 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:30.065 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:30.065 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:30.065 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:12:30.065 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.065 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.065 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:30.065 [2024-11-20 11:22:13.179054] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.325 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.325 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:30.325 "name": "Existed_Raid", 00:12:30.325 "aliases": [ 00:12:30.325 "78f295be-d20c-4cd4-8568-31bfef33d59d" 00:12:30.325 ], 00:12:30.325 "product_name": "Raid Volume", 00:12:30.325 "block_size": 512, 00:12:30.325 "num_blocks": 262144, 00:12:30.325 "uuid": "78f295be-d20c-4cd4-8568-31bfef33d59d", 00:12:30.325 "assigned_rate_limits": { 00:12:30.325 "rw_ios_per_sec": 0, 00:12:30.325 "rw_mbytes_per_sec": 0, 00:12:30.325 "r_mbytes_per_sec": 0, 00:12:30.325 "w_mbytes_per_sec": 0 00:12:30.325 }, 00:12:30.325 "claimed": false, 00:12:30.325 "zoned": false, 00:12:30.325 "supported_io_types": { 00:12:30.325 "read": true, 00:12:30.325 "write": true, 00:12:30.325 "unmap": true, 00:12:30.325 "flush": true, 00:12:30.325 "reset": true, 00:12:30.325 "nvme_admin": false, 00:12:30.325 "nvme_io": false, 00:12:30.325 "nvme_io_md": false, 00:12:30.325 "write_zeroes": true, 00:12:30.325 "zcopy": false, 00:12:30.325 "get_zone_info": false, 00:12:30.325 "zone_management": false, 00:12:30.325 "zone_append": false, 00:12:30.325 "compare": false, 00:12:30.325 "compare_and_write": false, 00:12:30.325 "abort": false, 00:12:30.325 "seek_hole": false, 00:12:30.325 "seek_data": false, 00:12:30.325 "copy": false, 00:12:30.325 "nvme_iov_md": false 00:12:30.325 }, 00:12:30.325 "memory_domains": [ 00:12:30.325 { 00:12:30.325 "dma_device_id": "system", 
00:12:30.325 "dma_device_type": 1 00:12:30.325 }, 00:12:30.325 { 00:12:30.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.325 "dma_device_type": 2 00:12:30.325 }, 00:12:30.325 { 00:12:30.325 "dma_device_id": "system", 00:12:30.325 "dma_device_type": 1 00:12:30.325 }, 00:12:30.325 { 00:12:30.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.325 "dma_device_type": 2 00:12:30.325 }, 00:12:30.325 { 00:12:30.325 "dma_device_id": "system", 00:12:30.325 "dma_device_type": 1 00:12:30.325 }, 00:12:30.325 { 00:12:30.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.325 "dma_device_type": 2 00:12:30.325 }, 00:12:30.325 { 00:12:30.325 "dma_device_id": "system", 00:12:30.325 "dma_device_type": 1 00:12:30.325 }, 00:12:30.325 { 00:12:30.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.325 "dma_device_type": 2 00:12:30.325 } 00:12:30.325 ], 00:12:30.325 "driver_specific": { 00:12:30.325 "raid": { 00:12:30.325 "uuid": "78f295be-d20c-4cd4-8568-31bfef33d59d", 00:12:30.325 "strip_size_kb": 64, 00:12:30.325 "state": "online", 00:12:30.325 "raid_level": "concat", 00:12:30.325 "superblock": false, 00:12:30.325 "num_base_bdevs": 4, 00:12:30.325 "num_base_bdevs_discovered": 4, 00:12:30.325 "num_base_bdevs_operational": 4, 00:12:30.325 "base_bdevs_list": [ 00:12:30.325 { 00:12:30.325 "name": "NewBaseBdev", 00:12:30.325 "uuid": "cae93408-0660-423c-8a59-4f6f55bd6263", 00:12:30.325 "is_configured": true, 00:12:30.325 "data_offset": 0, 00:12:30.325 "data_size": 65536 00:12:30.325 }, 00:12:30.325 { 00:12:30.325 "name": "BaseBdev2", 00:12:30.325 "uuid": "b0c8386e-e8e3-4817-8b36-2a44ed263cd8", 00:12:30.325 "is_configured": true, 00:12:30.325 "data_offset": 0, 00:12:30.325 "data_size": 65536 00:12:30.325 }, 00:12:30.325 { 00:12:30.325 "name": "BaseBdev3", 00:12:30.325 "uuid": "79abe75d-60b8-4f7e-9242-46bf556b8d59", 00:12:30.325 "is_configured": true, 00:12:30.325 "data_offset": 0, 00:12:30.325 "data_size": 65536 00:12:30.325 }, 00:12:30.325 { 00:12:30.325 "name": "BaseBdev4", 
00:12:30.325 "uuid": "6d2edb46-ac86-4f04-934d-7a6c0f307650", 00:12:30.325 "is_configured": true, 00:12:30.325 "data_offset": 0, 00:12:30.325 "data_size": 65536 00:12:30.325 } 00:12:30.325 ] 00:12:30.325 } 00:12:30.325 } 00:12:30.325 }' 00:12:30.325 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:30.325 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:30.325 BaseBdev2 00:12:30.325 BaseBdev3 00:12:30.325 BaseBdev4' 00:12:30.325 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.325 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:30.325 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.325 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:30.325 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.325 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.325 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.325 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.325 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.325 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.325 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.325 11:22:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.325 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:30.326 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.326 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.326 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.326 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.326 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.326 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.326 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:30.326 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.326 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.326 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.326 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.586 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.586 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.586 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.586 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:30.586 11:22:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.586 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.586 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.586 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.586 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:30.586 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:30.586 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:30.586 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.586 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.586 [2024-11-20 11:22:13.510130] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:30.586 [2024-11-20 11:22:13.510163] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.586 [2024-11-20 11:22:13.510258] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.586 [2024-11-20 11:22:13.510338] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.586 [2024-11-20 11:22:13.510349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:30.586 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.586 11:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71425 00:12:30.586 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 71425 ']' 00:12:30.586 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71425 00:12:30.586 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:30.586 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.586 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71425 00:12:30.586 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.587 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.587 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71425' 00:12:30.587 killing process with pid 71425 00:12:30.587 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71425 00:12:30.587 [2024-11-20 11:22:13.546372] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:30.587 11:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71425 00:12:31.155 [2024-11-20 11:22:13.962305] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:32.092 00:12:32.092 real 0m11.752s 00:12:32.092 user 0m18.573s 00:12:32.092 sys 0m2.051s 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.092 ************************************ 00:12:32.092 END TEST raid_state_function_test 00:12:32.092 ************************************ 00:12:32.092 11:22:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:12:32.092 11:22:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:32.092 11:22:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.092 11:22:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.092 ************************************ 00:12:32.092 START TEST raid_state_function_test_sb 00:12:32.092 ************************************ 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:32.092 11:22:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:32.092 Process raid pid: 72104 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72104 00:12:32.092 11:22:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72104' 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72104 00:12:32.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72104 ']' 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.092 11:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.420 [2024-11-20 11:22:15.259319] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:12:32.420 [2024-11-20 11:22:15.259530] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.420 [2024-11-20 11:22:15.431693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.679 [2024-11-20 11:22:15.551697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.679 [2024-11-20 11:22:15.758907] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.679 [2024-11-20 11:22:15.758954] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.247 [2024-11-20 11:22:16.203200] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:33.247 [2024-11-20 11:22:16.203257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:33.247 [2024-11-20 11:22:16.203268] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:33.247 [2024-11-20 11:22:16.203278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:33.247 [2024-11-20 11:22:16.203285] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:12:33.247 [2024-11-20 11:22:16.203294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:33.247 [2024-11-20 11:22:16.203300] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:33.247 [2024-11-20 11:22:16.203309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.247 
11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.247 "name": "Existed_Raid", 00:12:33.247 "uuid": "f9235e28-4456-4093-88b7-c0f60d4aa3e1", 00:12:33.247 "strip_size_kb": 64, 00:12:33.247 "state": "configuring", 00:12:33.247 "raid_level": "concat", 00:12:33.247 "superblock": true, 00:12:33.247 "num_base_bdevs": 4, 00:12:33.247 "num_base_bdevs_discovered": 0, 00:12:33.247 "num_base_bdevs_operational": 4, 00:12:33.247 "base_bdevs_list": [ 00:12:33.247 { 00:12:33.247 "name": "BaseBdev1", 00:12:33.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.247 "is_configured": false, 00:12:33.247 "data_offset": 0, 00:12:33.247 "data_size": 0 00:12:33.247 }, 00:12:33.247 { 00:12:33.247 "name": "BaseBdev2", 00:12:33.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.247 "is_configured": false, 00:12:33.247 "data_offset": 0, 00:12:33.247 "data_size": 0 00:12:33.247 }, 00:12:33.247 { 00:12:33.247 "name": "BaseBdev3", 00:12:33.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.247 "is_configured": false, 00:12:33.247 "data_offset": 0, 00:12:33.247 "data_size": 0 00:12:33.247 }, 00:12:33.247 { 00:12:33.247 "name": "BaseBdev4", 00:12:33.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.247 "is_configured": false, 00:12:33.247 "data_offset": 0, 00:12:33.247 "data_size": 0 00:12:33.247 } 00:12:33.247 ] 00:12:33.247 }' 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.247 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.816 11:22:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.816 [2024-11-20 11:22:16.646391] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:33.816 [2024-11-20 11:22:16.646503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.816 [2024-11-20 11:22:16.658388] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:33.816 [2024-11-20 11:22:16.658438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:33.816 [2024-11-20 11:22:16.658448] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:33.816 [2024-11-20 11:22:16.658474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:33.816 [2024-11-20 11:22:16.658482] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:33.816 [2024-11-20 11:22:16.658492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:33.816 [2024-11-20 11:22:16.658499] bdev.c:8282:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:12:33.816 [2024-11-20 11:22:16.658509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.816 [2024-11-20 11:22:16.706562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:33.816 BaseBdev1 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.816 [ 00:12:33.816 { 00:12:33.816 "name": "BaseBdev1", 00:12:33.816 "aliases": [ 00:12:33.816 "d8c57cef-bebe-41ba-b291-b9167774a3f1" 00:12:33.816 ], 00:12:33.816 "product_name": "Malloc disk", 00:12:33.816 "block_size": 512, 00:12:33.816 "num_blocks": 65536, 00:12:33.816 "uuid": "d8c57cef-bebe-41ba-b291-b9167774a3f1", 00:12:33.816 "assigned_rate_limits": { 00:12:33.816 "rw_ios_per_sec": 0, 00:12:33.816 "rw_mbytes_per_sec": 0, 00:12:33.816 "r_mbytes_per_sec": 0, 00:12:33.816 "w_mbytes_per_sec": 0 00:12:33.816 }, 00:12:33.816 "claimed": true, 00:12:33.816 "claim_type": "exclusive_write", 00:12:33.816 "zoned": false, 00:12:33.816 "supported_io_types": { 00:12:33.816 "read": true, 00:12:33.816 "write": true, 00:12:33.816 "unmap": true, 00:12:33.816 "flush": true, 00:12:33.816 "reset": true, 00:12:33.816 "nvme_admin": false, 00:12:33.816 "nvme_io": false, 00:12:33.816 "nvme_io_md": false, 00:12:33.816 "write_zeroes": true, 00:12:33.816 "zcopy": true, 00:12:33.816 "get_zone_info": false, 00:12:33.816 "zone_management": false, 00:12:33.816 "zone_append": false, 00:12:33.816 "compare": false, 00:12:33.816 "compare_and_write": false, 00:12:33.816 "abort": true, 00:12:33.816 "seek_hole": false, 00:12:33.816 "seek_data": false, 00:12:33.816 "copy": true, 00:12:33.816 "nvme_iov_md": false 00:12:33.816 }, 00:12:33.816 "memory_domains": [ 00:12:33.816 { 00:12:33.816 "dma_device_id": "system", 00:12:33.816 "dma_device_type": 1 00:12:33.816 }, 00:12:33.816 { 00:12:33.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.816 "dma_device_type": 2 00:12:33.816 } 
00:12:33.816 ], 00:12:33.816 "driver_specific": {} 00:12:33.816 } 00:12:33.816 ] 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.816 11:22:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.816 "name": "Existed_Raid", 00:12:33.816 "uuid": "6fab9b1a-363b-47cc-a4e2-57060bd9374f", 00:12:33.816 "strip_size_kb": 64, 00:12:33.816 "state": "configuring", 00:12:33.816 "raid_level": "concat", 00:12:33.816 "superblock": true, 00:12:33.816 "num_base_bdevs": 4, 00:12:33.816 "num_base_bdevs_discovered": 1, 00:12:33.816 "num_base_bdevs_operational": 4, 00:12:33.816 "base_bdevs_list": [ 00:12:33.816 { 00:12:33.816 "name": "BaseBdev1", 00:12:33.816 "uuid": "d8c57cef-bebe-41ba-b291-b9167774a3f1", 00:12:33.816 "is_configured": true, 00:12:33.816 "data_offset": 2048, 00:12:33.816 "data_size": 63488 00:12:33.816 }, 00:12:33.816 { 00:12:33.816 "name": "BaseBdev2", 00:12:33.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.816 "is_configured": false, 00:12:33.816 "data_offset": 0, 00:12:33.816 "data_size": 0 00:12:33.816 }, 00:12:33.816 { 00:12:33.816 "name": "BaseBdev3", 00:12:33.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.816 "is_configured": false, 00:12:33.816 "data_offset": 0, 00:12:33.816 "data_size": 0 00:12:33.816 }, 00:12:33.816 { 00:12:33.816 "name": "BaseBdev4", 00:12:33.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.816 "is_configured": false, 00:12:33.816 "data_offset": 0, 00:12:33.816 "data_size": 0 00:12:33.816 } 00:12:33.816 ] 00:12:33.816 }' 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.816 11:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.075 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:34.075 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.075 11:22:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.334 [2024-11-20 11:22:17.189811] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:34.334 [2024-11-20 11:22:17.189868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:34.334 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.334 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:34.334 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.334 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.334 [2024-11-20 11:22:17.201854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:34.334 [2024-11-20 11:22:17.203727] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:34.334 [2024-11-20 11:22:17.203775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:34.334 [2024-11-20 11:22:17.203786] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:34.334 [2024-11-20 11:22:17.203798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:34.334 [2024-11-20 11:22:17.203806] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:34.334 [2024-11-20 11:22:17.203816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:34.334 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.334 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:12:34.334 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:34.334 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:34.335 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.335 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.335 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:34.335 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.335 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.335 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.335 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.335 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.335 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.335 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.335 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.335 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.335 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.335 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.335 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:34.335 "name": "Existed_Raid", 00:12:34.335 "uuid": "cc8050b7-3dfc-4e5d-a732-afa0caf33288", 00:12:34.335 "strip_size_kb": 64, 00:12:34.335 "state": "configuring", 00:12:34.335 "raid_level": "concat", 00:12:34.335 "superblock": true, 00:12:34.335 "num_base_bdevs": 4, 00:12:34.335 "num_base_bdevs_discovered": 1, 00:12:34.335 "num_base_bdevs_operational": 4, 00:12:34.335 "base_bdevs_list": [ 00:12:34.335 { 00:12:34.335 "name": "BaseBdev1", 00:12:34.335 "uuid": "d8c57cef-bebe-41ba-b291-b9167774a3f1", 00:12:34.335 "is_configured": true, 00:12:34.335 "data_offset": 2048, 00:12:34.335 "data_size": 63488 00:12:34.335 }, 00:12:34.335 { 00:12:34.335 "name": "BaseBdev2", 00:12:34.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.335 "is_configured": false, 00:12:34.335 "data_offset": 0, 00:12:34.335 "data_size": 0 00:12:34.335 }, 00:12:34.335 { 00:12:34.335 "name": "BaseBdev3", 00:12:34.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.335 "is_configured": false, 00:12:34.335 "data_offset": 0, 00:12:34.335 "data_size": 0 00:12:34.335 }, 00:12:34.335 { 00:12:34.335 "name": "BaseBdev4", 00:12:34.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.335 "is_configured": false, 00:12:34.335 "data_offset": 0, 00:12:34.335 "data_size": 0 00:12:34.335 } 00:12:34.335 ] 00:12:34.335 }' 00:12:34.335 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.335 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.593 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:34.593 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.593 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.593 [2024-11-20 11:22:17.694974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:12:34.593 BaseBdev2 00:12:34.593 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.593 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:34.593 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:34.593 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:34.593 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:34.593 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:34.594 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:34.594 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:34.594 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.594 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.854 [ 00:12:34.854 { 00:12:34.854 "name": "BaseBdev2", 00:12:34.854 "aliases": [ 00:12:34.854 "6b0b7aeb-d367-45cf-b992-01d620e7f35a" 00:12:34.854 ], 00:12:34.854 "product_name": "Malloc disk", 00:12:34.854 "block_size": 512, 00:12:34.854 "num_blocks": 65536, 00:12:34.854 "uuid": "6b0b7aeb-d367-45cf-b992-01d620e7f35a", 
00:12:34.854 "assigned_rate_limits": { 00:12:34.854 "rw_ios_per_sec": 0, 00:12:34.854 "rw_mbytes_per_sec": 0, 00:12:34.854 "r_mbytes_per_sec": 0, 00:12:34.854 "w_mbytes_per_sec": 0 00:12:34.854 }, 00:12:34.854 "claimed": true, 00:12:34.854 "claim_type": "exclusive_write", 00:12:34.854 "zoned": false, 00:12:34.854 "supported_io_types": { 00:12:34.854 "read": true, 00:12:34.854 "write": true, 00:12:34.854 "unmap": true, 00:12:34.854 "flush": true, 00:12:34.854 "reset": true, 00:12:34.854 "nvme_admin": false, 00:12:34.854 "nvme_io": false, 00:12:34.854 "nvme_io_md": false, 00:12:34.854 "write_zeroes": true, 00:12:34.854 "zcopy": true, 00:12:34.854 "get_zone_info": false, 00:12:34.854 "zone_management": false, 00:12:34.854 "zone_append": false, 00:12:34.854 "compare": false, 00:12:34.854 "compare_and_write": false, 00:12:34.854 "abort": true, 00:12:34.854 "seek_hole": false, 00:12:34.854 "seek_data": false, 00:12:34.854 "copy": true, 00:12:34.854 "nvme_iov_md": false 00:12:34.854 }, 00:12:34.854 "memory_domains": [ 00:12:34.854 { 00:12:34.854 "dma_device_id": "system", 00:12:34.854 "dma_device_type": 1 00:12:34.854 }, 00:12:34.854 { 00:12:34.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.854 "dma_device_type": 2 00:12:34.854 } 00:12:34.854 ], 00:12:34.854 "driver_specific": {} 00:12:34.854 } 00:12:34.854 ] 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.854 "name": "Existed_Raid", 00:12:34.854 "uuid": "cc8050b7-3dfc-4e5d-a732-afa0caf33288", 00:12:34.854 "strip_size_kb": 64, 00:12:34.854 "state": "configuring", 00:12:34.854 "raid_level": "concat", 00:12:34.854 "superblock": true, 00:12:34.854 "num_base_bdevs": 4, 00:12:34.854 "num_base_bdevs_discovered": 2, 00:12:34.854 
"num_base_bdevs_operational": 4, 00:12:34.854 "base_bdevs_list": [ 00:12:34.854 { 00:12:34.854 "name": "BaseBdev1", 00:12:34.854 "uuid": "d8c57cef-bebe-41ba-b291-b9167774a3f1", 00:12:34.854 "is_configured": true, 00:12:34.854 "data_offset": 2048, 00:12:34.854 "data_size": 63488 00:12:34.854 }, 00:12:34.854 { 00:12:34.854 "name": "BaseBdev2", 00:12:34.854 "uuid": "6b0b7aeb-d367-45cf-b992-01d620e7f35a", 00:12:34.854 "is_configured": true, 00:12:34.854 "data_offset": 2048, 00:12:34.854 "data_size": 63488 00:12:34.854 }, 00:12:34.854 { 00:12:34.854 "name": "BaseBdev3", 00:12:34.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.854 "is_configured": false, 00:12:34.854 "data_offset": 0, 00:12:34.854 "data_size": 0 00:12:34.854 }, 00:12:34.854 { 00:12:34.854 "name": "BaseBdev4", 00:12:34.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.854 "is_configured": false, 00:12:34.854 "data_offset": 0, 00:12:34.854 "data_size": 0 00:12:34.854 } 00:12:34.854 ] 00:12:34.854 }' 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.854 11:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.114 [2024-11-20 11:22:18.183885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:35.114 BaseBdev3 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.114 [ 00:12:35.114 { 00:12:35.114 "name": "BaseBdev3", 00:12:35.114 "aliases": [ 00:12:35.114 "3c504781-1f1f-46ed-b5d0-84e5b7216918" 00:12:35.114 ], 00:12:35.114 "product_name": "Malloc disk", 00:12:35.114 "block_size": 512, 00:12:35.114 "num_blocks": 65536, 00:12:35.114 "uuid": "3c504781-1f1f-46ed-b5d0-84e5b7216918", 00:12:35.114 "assigned_rate_limits": { 00:12:35.114 "rw_ios_per_sec": 0, 00:12:35.114 "rw_mbytes_per_sec": 0, 00:12:35.114 "r_mbytes_per_sec": 0, 00:12:35.114 "w_mbytes_per_sec": 0 00:12:35.114 }, 00:12:35.114 "claimed": true, 00:12:35.114 "claim_type": "exclusive_write", 00:12:35.114 "zoned": false, 00:12:35.114 "supported_io_types": { 
00:12:35.114 "read": true, 00:12:35.114 "write": true, 00:12:35.114 "unmap": true, 00:12:35.114 "flush": true, 00:12:35.114 "reset": true, 00:12:35.114 "nvme_admin": false, 00:12:35.114 "nvme_io": false, 00:12:35.114 "nvme_io_md": false, 00:12:35.114 "write_zeroes": true, 00:12:35.114 "zcopy": true, 00:12:35.114 "get_zone_info": false, 00:12:35.114 "zone_management": false, 00:12:35.114 "zone_append": false, 00:12:35.114 "compare": false, 00:12:35.114 "compare_and_write": false, 00:12:35.114 "abort": true, 00:12:35.114 "seek_hole": false, 00:12:35.114 "seek_data": false, 00:12:35.114 "copy": true, 00:12:35.114 "nvme_iov_md": false 00:12:35.114 }, 00:12:35.114 "memory_domains": [ 00:12:35.114 { 00:12:35.114 "dma_device_id": "system", 00:12:35.114 "dma_device_type": 1 00:12:35.114 }, 00:12:35.114 { 00:12:35.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.114 "dma_device_type": 2 00:12:35.114 } 00:12:35.114 ], 00:12:35.114 "driver_specific": {} 00:12:35.114 } 00:12:35.114 ] 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.114 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.374 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.374 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.374 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.374 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.374 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.374 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.374 "name": "Existed_Raid", 00:12:35.374 "uuid": "cc8050b7-3dfc-4e5d-a732-afa0caf33288", 00:12:35.374 "strip_size_kb": 64, 00:12:35.374 "state": "configuring", 00:12:35.374 "raid_level": "concat", 00:12:35.374 "superblock": true, 00:12:35.374 "num_base_bdevs": 4, 00:12:35.374 "num_base_bdevs_discovered": 3, 00:12:35.374 "num_base_bdevs_operational": 4, 00:12:35.374 "base_bdevs_list": [ 00:12:35.374 { 00:12:35.374 "name": "BaseBdev1", 00:12:35.374 "uuid": "d8c57cef-bebe-41ba-b291-b9167774a3f1", 00:12:35.374 "is_configured": true, 00:12:35.374 "data_offset": 2048, 00:12:35.374 "data_size": 63488 00:12:35.374 }, 00:12:35.374 { 00:12:35.374 "name": "BaseBdev2", 00:12:35.374 
"uuid": "6b0b7aeb-d367-45cf-b992-01d620e7f35a", 00:12:35.374 "is_configured": true, 00:12:35.374 "data_offset": 2048, 00:12:35.374 "data_size": 63488 00:12:35.374 }, 00:12:35.374 { 00:12:35.374 "name": "BaseBdev3", 00:12:35.374 "uuid": "3c504781-1f1f-46ed-b5d0-84e5b7216918", 00:12:35.374 "is_configured": true, 00:12:35.374 "data_offset": 2048, 00:12:35.374 "data_size": 63488 00:12:35.374 }, 00:12:35.374 { 00:12:35.374 "name": "BaseBdev4", 00:12:35.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.374 "is_configured": false, 00:12:35.374 "data_offset": 0, 00:12:35.374 "data_size": 0 00:12:35.374 } 00:12:35.374 ] 00:12:35.374 }' 00:12:35.374 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.374 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.633 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:35.633 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.633 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.633 [2024-11-20 11:22:18.684381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:35.633 [2024-11-20 11:22:18.684888] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:35.633 [2024-11-20 11:22:18.684948] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:35.633 [2024-11-20 11:22:18.685301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:35.633 [2024-11-20 11:22:18.685529] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:35.633 [2024-11-20 11:22:18.685584] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:12:35.633 BaseBdev4 00:12:35.633 [2024-11-20 11:22:18.685779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.633 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.633 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:35.633 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:35.633 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.633 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:35.633 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.633 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:35.633 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:35.633 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.633 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.633 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.633 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:35.633 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.633 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.633 [ 00:12:35.633 { 00:12:35.633 "name": "BaseBdev4", 00:12:35.633 "aliases": [ 00:12:35.633 "b84207eb-e580-4a41-9766-b87d0fe63916" 00:12:35.633 ], 00:12:35.633 "product_name": "Malloc disk", 00:12:35.633 "block_size": 512, 
00:12:35.633 "num_blocks": 65536, 00:12:35.633 "uuid": "b84207eb-e580-4a41-9766-b87d0fe63916", 00:12:35.633 "assigned_rate_limits": { 00:12:35.633 "rw_ios_per_sec": 0, 00:12:35.633 "rw_mbytes_per_sec": 0, 00:12:35.633 "r_mbytes_per_sec": 0, 00:12:35.633 "w_mbytes_per_sec": 0 00:12:35.633 }, 00:12:35.633 "claimed": true, 00:12:35.633 "claim_type": "exclusive_write", 00:12:35.633 "zoned": false, 00:12:35.633 "supported_io_types": { 00:12:35.633 "read": true, 00:12:35.633 "write": true, 00:12:35.633 "unmap": true, 00:12:35.633 "flush": true, 00:12:35.633 "reset": true, 00:12:35.633 "nvme_admin": false, 00:12:35.633 "nvme_io": false, 00:12:35.633 "nvme_io_md": false, 00:12:35.633 "write_zeroes": true, 00:12:35.633 "zcopy": true, 00:12:35.633 "get_zone_info": false, 00:12:35.633 "zone_management": false, 00:12:35.633 "zone_append": false, 00:12:35.633 "compare": false, 00:12:35.633 "compare_and_write": false, 00:12:35.633 "abort": true, 00:12:35.633 "seek_hole": false, 00:12:35.633 "seek_data": false, 00:12:35.633 "copy": true, 00:12:35.633 "nvme_iov_md": false 00:12:35.633 }, 00:12:35.633 "memory_domains": [ 00:12:35.633 { 00:12:35.633 "dma_device_id": "system", 00:12:35.633 "dma_device_type": 1 00:12:35.633 }, 00:12:35.633 { 00:12:35.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.633 "dma_device_type": 2 00:12:35.634 } 00:12:35.634 ], 00:12:35.634 "driver_specific": {} 00:12:35.634 } 00:12:35.634 ] 00:12:35.634 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.634 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:35.634 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:35.634 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:35.634 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:12:35.634 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.634 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.634 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:35.634 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.634 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.634 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.634 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.634 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.634 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.634 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.634 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.634 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.634 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.892 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.892 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.892 "name": "Existed_Raid", 00:12:35.892 "uuid": "cc8050b7-3dfc-4e5d-a732-afa0caf33288", 00:12:35.892 "strip_size_kb": 64, 00:12:35.892 "state": "online", 00:12:35.892 "raid_level": "concat", 00:12:35.892 "superblock": true, 00:12:35.892 "num_base_bdevs": 
4, 00:12:35.892 "num_base_bdevs_discovered": 4, 00:12:35.892 "num_base_bdevs_operational": 4, 00:12:35.892 "base_bdevs_list": [ 00:12:35.892 { 00:12:35.892 "name": "BaseBdev1", 00:12:35.892 "uuid": "d8c57cef-bebe-41ba-b291-b9167774a3f1", 00:12:35.892 "is_configured": true, 00:12:35.892 "data_offset": 2048, 00:12:35.892 "data_size": 63488 00:12:35.892 }, 00:12:35.892 { 00:12:35.892 "name": "BaseBdev2", 00:12:35.892 "uuid": "6b0b7aeb-d367-45cf-b992-01d620e7f35a", 00:12:35.892 "is_configured": true, 00:12:35.892 "data_offset": 2048, 00:12:35.892 "data_size": 63488 00:12:35.892 }, 00:12:35.892 { 00:12:35.892 "name": "BaseBdev3", 00:12:35.892 "uuid": "3c504781-1f1f-46ed-b5d0-84e5b7216918", 00:12:35.892 "is_configured": true, 00:12:35.892 "data_offset": 2048, 00:12:35.892 "data_size": 63488 00:12:35.892 }, 00:12:35.892 { 00:12:35.892 "name": "BaseBdev4", 00:12:35.892 "uuid": "b84207eb-e580-4a41-9766-b87d0fe63916", 00:12:35.892 "is_configured": true, 00:12:35.892 "data_offset": 2048, 00:12:35.892 "data_size": 63488 00:12:35.892 } 00:12:35.892 ] 00:12:35.892 }' 00:12:35.892 11:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.892 11:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.151 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:36.151 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:36.151 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:36.151 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:36.151 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:36.151 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:36.151 
11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:36.151 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.151 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.151 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:36.151 [2024-11-20 11:22:19.180032] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:36.151 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.151 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:36.151 "name": "Existed_Raid", 00:12:36.151 "aliases": [ 00:12:36.151 "cc8050b7-3dfc-4e5d-a732-afa0caf33288" 00:12:36.151 ], 00:12:36.151 "product_name": "Raid Volume", 00:12:36.151 "block_size": 512, 00:12:36.151 "num_blocks": 253952, 00:12:36.151 "uuid": "cc8050b7-3dfc-4e5d-a732-afa0caf33288", 00:12:36.151 "assigned_rate_limits": { 00:12:36.151 "rw_ios_per_sec": 0, 00:12:36.151 "rw_mbytes_per_sec": 0, 00:12:36.151 "r_mbytes_per_sec": 0, 00:12:36.151 "w_mbytes_per_sec": 0 00:12:36.151 }, 00:12:36.151 "claimed": false, 00:12:36.151 "zoned": false, 00:12:36.151 "supported_io_types": { 00:12:36.151 "read": true, 00:12:36.151 "write": true, 00:12:36.151 "unmap": true, 00:12:36.151 "flush": true, 00:12:36.151 "reset": true, 00:12:36.151 "nvme_admin": false, 00:12:36.151 "nvme_io": false, 00:12:36.151 "nvme_io_md": false, 00:12:36.151 "write_zeroes": true, 00:12:36.151 "zcopy": false, 00:12:36.151 "get_zone_info": false, 00:12:36.151 "zone_management": false, 00:12:36.151 "zone_append": false, 00:12:36.151 "compare": false, 00:12:36.151 "compare_and_write": false, 00:12:36.151 "abort": false, 00:12:36.151 "seek_hole": false, 00:12:36.151 "seek_data": false, 00:12:36.151 "copy": false, 00:12:36.151 
"nvme_iov_md": false 00:12:36.151 }, 00:12:36.151 "memory_domains": [ 00:12:36.151 { 00:12:36.151 "dma_device_id": "system", 00:12:36.151 "dma_device_type": 1 00:12:36.151 }, 00:12:36.151 { 00:12:36.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.151 "dma_device_type": 2 00:12:36.151 }, 00:12:36.151 { 00:12:36.151 "dma_device_id": "system", 00:12:36.151 "dma_device_type": 1 00:12:36.151 }, 00:12:36.151 { 00:12:36.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.151 "dma_device_type": 2 00:12:36.151 }, 00:12:36.151 { 00:12:36.151 "dma_device_id": "system", 00:12:36.151 "dma_device_type": 1 00:12:36.151 }, 00:12:36.151 { 00:12:36.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.151 "dma_device_type": 2 00:12:36.151 }, 00:12:36.151 { 00:12:36.151 "dma_device_id": "system", 00:12:36.151 "dma_device_type": 1 00:12:36.151 }, 00:12:36.151 { 00:12:36.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.151 "dma_device_type": 2 00:12:36.151 } 00:12:36.151 ], 00:12:36.151 "driver_specific": { 00:12:36.151 "raid": { 00:12:36.151 "uuid": "cc8050b7-3dfc-4e5d-a732-afa0caf33288", 00:12:36.151 "strip_size_kb": 64, 00:12:36.151 "state": "online", 00:12:36.151 "raid_level": "concat", 00:12:36.151 "superblock": true, 00:12:36.151 "num_base_bdevs": 4, 00:12:36.151 "num_base_bdevs_discovered": 4, 00:12:36.151 "num_base_bdevs_operational": 4, 00:12:36.151 "base_bdevs_list": [ 00:12:36.151 { 00:12:36.152 "name": "BaseBdev1", 00:12:36.152 "uuid": "d8c57cef-bebe-41ba-b291-b9167774a3f1", 00:12:36.152 "is_configured": true, 00:12:36.152 "data_offset": 2048, 00:12:36.152 "data_size": 63488 00:12:36.152 }, 00:12:36.152 { 00:12:36.152 "name": "BaseBdev2", 00:12:36.152 "uuid": "6b0b7aeb-d367-45cf-b992-01d620e7f35a", 00:12:36.152 "is_configured": true, 00:12:36.152 "data_offset": 2048, 00:12:36.152 "data_size": 63488 00:12:36.152 }, 00:12:36.152 { 00:12:36.152 "name": "BaseBdev3", 00:12:36.152 "uuid": "3c504781-1f1f-46ed-b5d0-84e5b7216918", 00:12:36.152 "is_configured": true, 
00:12:36.152 "data_offset": 2048, 00:12:36.152 "data_size": 63488 00:12:36.152 }, 00:12:36.152 { 00:12:36.152 "name": "BaseBdev4", 00:12:36.152 "uuid": "b84207eb-e580-4a41-9766-b87d0fe63916", 00:12:36.152 "is_configured": true, 00:12:36.152 "data_offset": 2048, 00:12:36.152 "data_size": 63488 00:12:36.152 } 00:12:36.152 ] 00:12:36.152 } 00:12:36.152 } 00:12:36.152 }' 00:12:36.152 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:36.410 BaseBdev2 00:12:36.410 BaseBdev3 00:12:36.410 BaseBdev4' 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.410 11:22:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.410 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.670 [2024-11-20 11:22:19.539169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:36.670 [2024-11-20 11:22:19.539250] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:36.670 [2024-11-20 11:22:19.539347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.670 "name": "Existed_Raid", 00:12:36.670 "uuid": "cc8050b7-3dfc-4e5d-a732-afa0caf33288", 00:12:36.670 "strip_size_kb": 64, 00:12:36.670 "state": "offline", 00:12:36.670 "raid_level": "concat", 00:12:36.670 "superblock": true, 00:12:36.670 "num_base_bdevs": 4, 00:12:36.670 "num_base_bdevs_discovered": 3, 00:12:36.670 "num_base_bdevs_operational": 3, 00:12:36.670 "base_bdevs_list": [ 00:12:36.670 { 00:12:36.670 "name": null, 00:12:36.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.670 "is_configured": false, 00:12:36.670 "data_offset": 0, 00:12:36.670 "data_size": 63488 00:12:36.670 }, 00:12:36.670 { 00:12:36.670 "name": "BaseBdev2", 00:12:36.670 "uuid": "6b0b7aeb-d367-45cf-b992-01d620e7f35a", 00:12:36.670 "is_configured": true, 00:12:36.670 "data_offset": 2048, 00:12:36.670 "data_size": 63488 00:12:36.670 }, 00:12:36.670 { 00:12:36.670 "name": "BaseBdev3", 00:12:36.670 "uuid": "3c504781-1f1f-46ed-b5d0-84e5b7216918", 00:12:36.670 "is_configured": true, 00:12:36.670 "data_offset": 2048, 00:12:36.670 "data_size": 63488 00:12:36.670 }, 00:12:36.670 { 00:12:36.670 "name": "BaseBdev4", 00:12:36.670 "uuid": "b84207eb-e580-4a41-9766-b87d0fe63916", 00:12:36.670 "is_configured": true, 00:12:36.670 "data_offset": 2048, 00:12:36.670 "data_size": 63488 00:12:36.670 } 00:12:36.670 ] 00:12:36.670 }' 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.670 11:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.239 
11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.239 [2024-11-20 11:22:20.159230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.239 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.239 [2024-11-20 11:22:20.319630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:37.498 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.498 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:37.498 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:37.498 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.498 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:37.498 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.498 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.498 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.498 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:37.498 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:37.498 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:37.498 11:22:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.498 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.498 [2024-11-20 11:22:20.478968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:37.498 [2024-11-20 11:22:20.479021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:37.498 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.498 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:37.498 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:37.498 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.498 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.498 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.498 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:37.498 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.758 BaseBdev2 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.758 [ 00:12:37.758 { 00:12:37.758 "name": "BaseBdev2", 00:12:37.758 "aliases": [ 00:12:37.758 
"05b0c780-efdc-4692-b0f5-50f1b179ef55" 00:12:37.758 ], 00:12:37.758 "product_name": "Malloc disk", 00:12:37.758 "block_size": 512, 00:12:37.758 "num_blocks": 65536, 00:12:37.758 "uuid": "05b0c780-efdc-4692-b0f5-50f1b179ef55", 00:12:37.758 "assigned_rate_limits": { 00:12:37.758 "rw_ios_per_sec": 0, 00:12:37.758 "rw_mbytes_per_sec": 0, 00:12:37.758 "r_mbytes_per_sec": 0, 00:12:37.758 "w_mbytes_per_sec": 0 00:12:37.758 }, 00:12:37.758 "claimed": false, 00:12:37.758 "zoned": false, 00:12:37.758 "supported_io_types": { 00:12:37.758 "read": true, 00:12:37.758 "write": true, 00:12:37.758 "unmap": true, 00:12:37.758 "flush": true, 00:12:37.758 "reset": true, 00:12:37.758 "nvme_admin": false, 00:12:37.758 "nvme_io": false, 00:12:37.758 "nvme_io_md": false, 00:12:37.758 "write_zeroes": true, 00:12:37.758 "zcopy": true, 00:12:37.758 "get_zone_info": false, 00:12:37.758 "zone_management": false, 00:12:37.758 "zone_append": false, 00:12:37.758 "compare": false, 00:12:37.758 "compare_and_write": false, 00:12:37.758 "abort": true, 00:12:37.758 "seek_hole": false, 00:12:37.758 "seek_data": false, 00:12:37.758 "copy": true, 00:12:37.758 "nvme_iov_md": false 00:12:37.758 }, 00:12:37.758 "memory_domains": [ 00:12:37.758 { 00:12:37.758 "dma_device_id": "system", 00:12:37.758 "dma_device_type": 1 00:12:37.758 }, 00:12:37.758 { 00:12:37.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.758 "dma_device_type": 2 00:12:37.758 } 00:12:37.758 ], 00:12:37.758 "driver_specific": {} 00:12:37.758 } 00:12:37.758 ] 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:37.758 11:22:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.758 BaseBdev3 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.758 [ 00:12:37.758 { 
00:12:37.758 "name": "BaseBdev3", 00:12:37.758 "aliases": [ 00:12:37.758 "965ccb9d-c145-4ff3-95b5-400382d24cb2" 00:12:37.758 ], 00:12:37.758 "product_name": "Malloc disk", 00:12:37.758 "block_size": 512, 00:12:37.758 "num_blocks": 65536, 00:12:37.758 "uuid": "965ccb9d-c145-4ff3-95b5-400382d24cb2", 00:12:37.758 "assigned_rate_limits": { 00:12:37.758 "rw_ios_per_sec": 0, 00:12:37.758 "rw_mbytes_per_sec": 0, 00:12:37.758 "r_mbytes_per_sec": 0, 00:12:37.758 "w_mbytes_per_sec": 0 00:12:37.758 }, 00:12:37.758 "claimed": false, 00:12:37.758 "zoned": false, 00:12:37.758 "supported_io_types": { 00:12:37.758 "read": true, 00:12:37.758 "write": true, 00:12:37.758 "unmap": true, 00:12:37.758 "flush": true, 00:12:37.758 "reset": true, 00:12:37.758 "nvme_admin": false, 00:12:37.758 "nvme_io": false, 00:12:37.758 "nvme_io_md": false, 00:12:37.758 "write_zeroes": true, 00:12:37.758 "zcopy": true, 00:12:37.758 "get_zone_info": false, 00:12:37.758 "zone_management": false, 00:12:37.758 "zone_append": false, 00:12:37.758 "compare": false, 00:12:37.758 "compare_and_write": false, 00:12:37.758 "abort": true, 00:12:37.758 "seek_hole": false, 00:12:37.758 "seek_data": false, 00:12:37.758 "copy": true, 00:12:37.758 "nvme_iov_md": false 00:12:37.758 }, 00:12:37.758 "memory_domains": [ 00:12:37.758 { 00:12:37.758 "dma_device_id": "system", 00:12:37.758 "dma_device_type": 1 00:12:37.758 }, 00:12:37.758 { 00:12:37.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.758 "dma_device_type": 2 00:12:37.758 } 00:12:37.758 ], 00:12:37.758 "driver_specific": {} 00:12:37.758 } 00:12:37.758 ] 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:37.758 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:37.759 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:37.759 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:37.759 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.759 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.759 BaseBdev4 00:12:37.759 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.759 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:37.759 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:37.759 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:37.759 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:37.759 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:37.759 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:37.759 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:37.759 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.759 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.759 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.759 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:37.759 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.759 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:37.759 [ 00:12:37.759 { 00:12:37.759 "name": "BaseBdev4", 00:12:37.759 "aliases": [ 00:12:37.759 "dd0a24b0-5a42-4d48-84af-44ecb223bc11" 00:12:37.759 ], 00:12:37.759 "product_name": "Malloc disk", 00:12:37.759 "block_size": 512, 00:12:37.759 "num_blocks": 65536, 00:12:37.759 "uuid": "dd0a24b0-5a42-4d48-84af-44ecb223bc11", 00:12:37.759 "assigned_rate_limits": { 00:12:37.759 "rw_ios_per_sec": 0, 00:12:37.759 "rw_mbytes_per_sec": 0, 00:12:37.759 "r_mbytes_per_sec": 0, 00:12:37.759 "w_mbytes_per_sec": 0 00:12:37.759 }, 00:12:37.759 "claimed": false, 00:12:37.759 "zoned": false, 00:12:37.759 "supported_io_types": { 00:12:37.759 "read": true, 00:12:37.759 "write": true, 00:12:37.759 "unmap": true, 00:12:38.019 "flush": true, 00:12:38.019 "reset": true, 00:12:38.019 "nvme_admin": false, 00:12:38.019 "nvme_io": false, 00:12:38.019 "nvme_io_md": false, 00:12:38.019 "write_zeroes": true, 00:12:38.019 "zcopy": true, 00:12:38.019 "get_zone_info": false, 00:12:38.019 "zone_management": false, 00:12:38.019 "zone_append": false, 00:12:38.019 "compare": false, 00:12:38.019 "compare_and_write": false, 00:12:38.019 "abort": true, 00:12:38.019 "seek_hole": false, 00:12:38.019 "seek_data": false, 00:12:38.019 "copy": true, 00:12:38.019 "nvme_iov_md": false 00:12:38.019 }, 00:12:38.019 "memory_domains": [ 00:12:38.019 { 00:12:38.019 "dma_device_id": "system", 00:12:38.019 "dma_device_type": 1 00:12:38.019 }, 00:12:38.019 { 00:12:38.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.019 "dma_device_type": 2 00:12:38.019 } 00:12:38.019 ], 00:12:38.019 "driver_specific": {} 00:12:38.019 } 00:12:38.019 ] 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:38.019 11:22:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.019 [2024-11-20 11:22:20.884512] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:38.019 [2024-11-20 11:22:20.884615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:38.019 [2024-11-20 11:22:20.884674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:38.019 [2024-11-20 11:22:20.886664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:38.019 [2024-11-20 11:22:20.886775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.019 "name": "Existed_Raid", 00:12:38.019 "uuid": "483c8498-0f9a-442c-bb88-a2803cfc2a2f", 00:12:38.019 "strip_size_kb": 64, 00:12:38.019 "state": "configuring", 00:12:38.019 "raid_level": "concat", 00:12:38.019 "superblock": true, 00:12:38.019 "num_base_bdevs": 4, 00:12:38.019 "num_base_bdevs_discovered": 3, 00:12:38.019 "num_base_bdevs_operational": 4, 00:12:38.019 "base_bdevs_list": [ 00:12:38.019 { 00:12:38.019 "name": "BaseBdev1", 00:12:38.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.019 "is_configured": false, 00:12:38.019 "data_offset": 0, 00:12:38.019 "data_size": 0 00:12:38.019 }, 00:12:38.019 { 00:12:38.019 "name": "BaseBdev2", 00:12:38.019 "uuid": "05b0c780-efdc-4692-b0f5-50f1b179ef55", 00:12:38.019 "is_configured": true, 00:12:38.019 "data_offset": 2048, 00:12:38.019 "data_size": 63488 
00:12:38.019 }, 00:12:38.019 { 00:12:38.019 "name": "BaseBdev3", 00:12:38.019 "uuid": "965ccb9d-c145-4ff3-95b5-400382d24cb2", 00:12:38.019 "is_configured": true, 00:12:38.019 "data_offset": 2048, 00:12:38.019 "data_size": 63488 00:12:38.019 }, 00:12:38.019 { 00:12:38.019 "name": "BaseBdev4", 00:12:38.019 "uuid": "dd0a24b0-5a42-4d48-84af-44ecb223bc11", 00:12:38.019 "is_configured": true, 00:12:38.019 "data_offset": 2048, 00:12:38.019 "data_size": 63488 00:12:38.019 } 00:12:38.019 ] 00:12:38.019 }' 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.019 11:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.302 [2024-11-20 11:22:21.343725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.302 "name": "Existed_Raid", 00:12:38.302 "uuid": "483c8498-0f9a-442c-bb88-a2803cfc2a2f", 00:12:38.302 "strip_size_kb": 64, 00:12:38.302 "state": "configuring", 00:12:38.302 "raid_level": "concat", 00:12:38.302 "superblock": true, 00:12:38.302 "num_base_bdevs": 4, 00:12:38.302 "num_base_bdevs_discovered": 2, 00:12:38.302 "num_base_bdevs_operational": 4, 00:12:38.302 "base_bdevs_list": [ 00:12:38.302 { 00:12:38.302 "name": "BaseBdev1", 00:12:38.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.302 "is_configured": false, 00:12:38.302 "data_offset": 0, 00:12:38.302 "data_size": 0 00:12:38.302 }, 00:12:38.302 { 00:12:38.302 "name": null, 00:12:38.302 "uuid": "05b0c780-efdc-4692-b0f5-50f1b179ef55", 00:12:38.302 "is_configured": false, 00:12:38.302 "data_offset": 0, 00:12:38.302 "data_size": 63488 
00:12:38.302 }, 00:12:38.302 { 00:12:38.302 "name": "BaseBdev3", 00:12:38.302 "uuid": "965ccb9d-c145-4ff3-95b5-400382d24cb2", 00:12:38.302 "is_configured": true, 00:12:38.302 "data_offset": 2048, 00:12:38.302 "data_size": 63488 00:12:38.302 }, 00:12:38.302 { 00:12:38.302 "name": "BaseBdev4", 00:12:38.302 "uuid": "dd0a24b0-5a42-4d48-84af-44ecb223bc11", 00:12:38.302 "is_configured": true, 00:12:38.302 "data_offset": 2048, 00:12:38.302 "data_size": 63488 00:12:38.302 } 00:12:38.302 ] 00:12:38.302 }' 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.302 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.868 [2024-11-20 11:22:21.888301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.868 BaseBdev1 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:38.868 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.869 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.869 [ 00:12:38.869 { 00:12:38.869 "name": "BaseBdev1", 00:12:38.869 "aliases": [ 00:12:38.869 "61ba5e84-3238-4f5b-bf73-70418f2ac9f9" 00:12:38.869 ], 00:12:38.869 "product_name": "Malloc disk", 00:12:38.869 "block_size": 512, 00:12:38.869 "num_blocks": 65536, 00:12:38.869 "uuid": "61ba5e84-3238-4f5b-bf73-70418f2ac9f9", 00:12:38.869 "assigned_rate_limits": { 00:12:38.869 "rw_ios_per_sec": 0, 00:12:38.869 "rw_mbytes_per_sec": 0, 
00:12:38.869 "r_mbytes_per_sec": 0, 00:12:38.869 "w_mbytes_per_sec": 0 00:12:38.869 }, 00:12:38.869 "claimed": true, 00:12:38.869 "claim_type": "exclusive_write", 00:12:38.869 "zoned": false, 00:12:38.869 "supported_io_types": { 00:12:38.869 "read": true, 00:12:38.869 "write": true, 00:12:38.869 "unmap": true, 00:12:38.869 "flush": true, 00:12:38.869 "reset": true, 00:12:38.869 "nvme_admin": false, 00:12:38.869 "nvme_io": false, 00:12:38.869 "nvme_io_md": false, 00:12:38.869 "write_zeroes": true, 00:12:38.869 "zcopy": true, 00:12:38.869 "get_zone_info": false, 00:12:38.869 "zone_management": false, 00:12:38.869 "zone_append": false, 00:12:38.869 "compare": false, 00:12:38.869 "compare_and_write": false, 00:12:38.869 "abort": true, 00:12:38.869 "seek_hole": false, 00:12:38.869 "seek_data": false, 00:12:38.869 "copy": true, 00:12:38.869 "nvme_iov_md": false 00:12:38.869 }, 00:12:38.869 "memory_domains": [ 00:12:38.869 { 00:12:38.869 "dma_device_id": "system", 00:12:38.869 "dma_device_type": 1 00:12:38.869 }, 00:12:38.869 { 00:12:38.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.869 "dma_device_type": 2 00:12:38.869 } 00:12:38.869 ], 00:12:38.869 "driver_specific": {} 00:12:38.869 } 00:12:38.869 ] 00:12:38.869 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.869 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:38.869 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:38.869 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.869 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.869 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:38.869 11:22:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.869 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.869 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.869 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.869 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.869 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.869 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.869 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.869 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.869 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.869 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.869 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.869 "name": "Existed_Raid", 00:12:38.869 "uuid": "483c8498-0f9a-442c-bb88-a2803cfc2a2f", 00:12:38.869 "strip_size_kb": 64, 00:12:38.869 "state": "configuring", 00:12:38.869 "raid_level": "concat", 00:12:38.869 "superblock": true, 00:12:38.869 "num_base_bdevs": 4, 00:12:38.869 "num_base_bdevs_discovered": 3, 00:12:38.869 "num_base_bdevs_operational": 4, 00:12:38.869 "base_bdevs_list": [ 00:12:38.869 { 00:12:38.869 "name": "BaseBdev1", 00:12:38.869 "uuid": "61ba5e84-3238-4f5b-bf73-70418f2ac9f9", 00:12:38.869 "is_configured": true, 00:12:38.869 "data_offset": 2048, 00:12:38.869 "data_size": 63488 00:12:38.869 }, 00:12:38.869 { 
00:12:38.869 "name": null, 00:12:38.869 "uuid": "05b0c780-efdc-4692-b0f5-50f1b179ef55", 00:12:38.869 "is_configured": false, 00:12:38.869 "data_offset": 0, 00:12:38.869 "data_size": 63488 00:12:38.869 }, 00:12:38.869 { 00:12:38.869 "name": "BaseBdev3", 00:12:38.869 "uuid": "965ccb9d-c145-4ff3-95b5-400382d24cb2", 00:12:38.869 "is_configured": true, 00:12:38.869 "data_offset": 2048, 00:12:38.869 "data_size": 63488 00:12:38.869 }, 00:12:38.869 { 00:12:38.869 "name": "BaseBdev4", 00:12:38.869 "uuid": "dd0a24b0-5a42-4d48-84af-44ecb223bc11", 00:12:38.869 "is_configured": true, 00:12:38.869 "data_offset": 2048, 00:12:38.869 "data_size": 63488 00:12:38.869 } 00:12:38.869 ] 00:12:38.869 }' 00:12:38.869 11:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.128 11:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.386 [2024-11-20 11:22:22.447528] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.386 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.644 11:22:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.644 "name": "Existed_Raid", 00:12:39.644 "uuid": "483c8498-0f9a-442c-bb88-a2803cfc2a2f", 00:12:39.644 "strip_size_kb": 64, 00:12:39.644 "state": "configuring", 00:12:39.644 "raid_level": "concat", 00:12:39.644 "superblock": true, 00:12:39.644 "num_base_bdevs": 4, 00:12:39.644 "num_base_bdevs_discovered": 2, 00:12:39.644 "num_base_bdevs_operational": 4, 00:12:39.644 "base_bdevs_list": [ 00:12:39.644 { 00:12:39.644 "name": "BaseBdev1", 00:12:39.644 "uuid": "61ba5e84-3238-4f5b-bf73-70418f2ac9f9", 00:12:39.644 "is_configured": true, 00:12:39.644 "data_offset": 2048, 00:12:39.644 "data_size": 63488 00:12:39.644 }, 00:12:39.644 { 00:12:39.644 "name": null, 00:12:39.644 "uuid": "05b0c780-efdc-4692-b0f5-50f1b179ef55", 00:12:39.644 "is_configured": false, 00:12:39.644 "data_offset": 0, 00:12:39.644 "data_size": 63488 00:12:39.644 }, 00:12:39.644 { 00:12:39.644 "name": null, 00:12:39.644 "uuid": "965ccb9d-c145-4ff3-95b5-400382d24cb2", 00:12:39.644 "is_configured": false, 00:12:39.644 "data_offset": 0, 00:12:39.644 "data_size": 63488 00:12:39.644 }, 00:12:39.644 { 00:12:39.644 "name": "BaseBdev4", 00:12:39.644 "uuid": "dd0a24b0-5a42-4d48-84af-44ecb223bc11", 00:12:39.644 "is_configured": true, 00:12:39.644 "data_offset": 2048, 00:12:39.644 "data_size": 63488 00:12:39.644 } 00:12:39.644 ] 00:12:39.644 }' 00:12:39.644 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.644 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.903 11:22:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.903 [2024-11-20 11:22:22.958610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.903 11:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.161 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.161 "name": "Existed_Raid", 00:12:40.161 "uuid": "483c8498-0f9a-442c-bb88-a2803cfc2a2f", 00:12:40.161 "strip_size_kb": 64, 00:12:40.161 "state": "configuring", 00:12:40.161 "raid_level": "concat", 00:12:40.161 "superblock": true, 00:12:40.161 "num_base_bdevs": 4, 00:12:40.161 "num_base_bdevs_discovered": 3, 00:12:40.161 "num_base_bdevs_operational": 4, 00:12:40.161 "base_bdevs_list": [ 00:12:40.161 { 00:12:40.161 "name": "BaseBdev1", 00:12:40.161 "uuid": "61ba5e84-3238-4f5b-bf73-70418f2ac9f9", 00:12:40.161 "is_configured": true, 00:12:40.161 "data_offset": 2048, 00:12:40.161 "data_size": 63488 00:12:40.161 }, 00:12:40.161 { 00:12:40.161 "name": null, 00:12:40.161 "uuid": "05b0c780-efdc-4692-b0f5-50f1b179ef55", 00:12:40.161 "is_configured": false, 00:12:40.161 "data_offset": 0, 00:12:40.161 "data_size": 63488 00:12:40.161 }, 00:12:40.161 { 00:12:40.161 "name": "BaseBdev3", 00:12:40.161 "uuid": "965ccb9d-c145-4ff3-95b5-400382d24cb2", 00:12:40.161 "is_configured": true, 00:12:40.161 "data_offset": 2048, 00:12:40.161 "data_size": 63488 00:12:40.161 }, 00:12:40.161 { 00:12:40.161 "name": "BaseBdev4", 00:12:40.161 "uuid": 
"dd0a24b0-5a42-4d48-84af-44ecb223bc11", 00:12:40.161 "is_configured": true, 00:12:40.161 "data_offset": 2048, 00:12:40.161 "data_size": 63488 00:12:40.161 } 00:12:40.161 ] 00:12:40.161 }' 00:12:40.161 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.161 11:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.420 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:40.420 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.420 11:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.420 11:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.420 11:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.420 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:40.420 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:40.420 11:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.420 11:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.420 [2024-11-20 11:22:23.501746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:40.679 11:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.679 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:40.679 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.679 11:22:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.679 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:40.679 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.679 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.679 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.679 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.679 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.679 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.679 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.679 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.679 11:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.679 11:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.679 11:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.679 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.679 "name": "Existed_Raid", 00:12:40.679 "uuid": "483c8498-0f9a-442c-bb88-a2803cfc2a2f", 00:12:40.679 "strip_size_kb": 64, 00:12:40.679 "state": "configuring", 00:12:40.679 "raid_level": "concat", 00:12:40.679 "superblock": true, 00:12:40.679 "num_base_bdevs": 4, 00:12:40.679 "num_base_bdevs_discovered": 2, 00:12:40.679 "num_base_bdevs_operational": 4, 00:12:40.679 "base_bdevs_list": [ 00:12:40.679 { 00:12:40.679 "name": null, 00:12:40.679 
"uuid": "61ba5e84-3238-4f5b-bf73-70418f2ac9f9", 00:12:40.679 "is_configured": false, 00:12:40.679 "data_offset": 0, 00:12:40.679 "data_size": 63488 00:12:40.679 }, 00:12:40.679 { 00:12:40.679 "name": null, 00:12:40.679 "uuid": "05b0c780-efdc-4692-b0f5-50f1b179ef55", 00:12:40.679 "is_configured": false, 00:12:40.679 "data_offset": 0, 00:12:40.679 "data_size": 63488 00:12:40.679 }, 00:12:40.679 { 00:12:40.679 "name": "BaseBdev3", 00:12:40.679 "uuid": "965ccb9d-c145-4ff3-95b5-400382d24cb2", 00:12:40.679 "is_configured": true, 00:12:40.679 "data_offset": 2048, 00:12:40.679 "data_size": 63488 00:12:40.679 }, 00:12:40.679 { 00:12:40.679 "name": "BaseBdev4", 00:12:40.679 "uuid": "dd0a24b0-5a42-4d48-84af-44ecb223bc11", 00:12:40.679 "is_configured": true, 00:12:40.679 "data_offset": 2048, 00:12:40.679 "data_size": 63488 00:12:40.679 } 00:12:40.679 ] 00:12:40.679 }' 00:12:40.679 11:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.679 11:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.247 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.248 [2024-11-20 11:22:24.128628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.248 11:22:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.248 "name": "Existed_Raid", 00:12:41.248 "uuid": "483c8498-0f9a-442c-bb88-a2803cfc2a2f", 00:12:41.248 "strip_size_kb": 64, 00:12:41.248 "state": "configuring", 00:12:41.248 "raid_level": "concat", 00:12:41.248 "superblock": true, 00:12:41.248 "num_base_bdevs": 4, 00:12:41.248 "num_base_bdevs_discovered": 3, 00:12:41.248 "num_base_bdevs_operational": 4, 00:12:41.248 "base_bdevs_list": [ 00:12:41.248 { 00:12:41.248 "name": null, 00:12:41.248 "uuid": "61ba5e84-3238-4f5b-bf73-70418f2ac9f9", 00:12:41.248 "is_configured": false, 00:12:41.248 "data_offset": 0, 00:12:41.248 "data_size": 63488 00:12:41.248 }, 00:12:41.248 { 00:12:41.248 "name": "BaseBdev2", 00:12:41.248 "uuid": "05b0c780-efdc-4692-b0f5-50f1b179ef55", 00:12:41.248 "is_configured": true, 00:12:41.248 "data_offset": 2048, 00:12:41.248 "data_size": 63488 00:12:41.248 }, 00:12:41.248 { 00:12:41.248 "name": "BaseBdev3", 00:12:41.248 "uuid": "965ccb9d-c145-4ff3-95b5-400382d24cb2", 00:12:41.248 "is_configured": true, 00:12:41.248 "data_offset": 2048, 00:12:41.248 "data_size": 63488 00:12:41.248 }, 00:12:41.248 { 00:12:41.248 "name": "BaseBdev4", 00:12:41.248 "uuid": "dd0a24b0-5a42-4d48-84af-44ecb223bc11", 00:12:41.248 "is_configured": true, 00:12:41.248 "data_offset": 2048, 00:12:41.248 "data_size": 63488 00:12:41.248 } 00:12:41.248 ] 00:12:41.248 }' 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.248 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.506 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:41.506 11:22:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.506 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.506 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.506 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.764 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:41.764 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.764 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.764 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.764 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:41.764 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.764 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 61ba5e84-3238-4f5b-bf73-70418f2ac9f9 00:12:41.764 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.764 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.764 [2024-11-20 11:22:24.741637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:41.764 [2024-11-20 11:22:24.741885] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:41.764 [2024-11-20 11:22:24.741898] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:41.764 [2024-11-20 11:22:24.742163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:41.764 [2024-11-20 11:22:24.742320] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:41.764 NewBaseBdev 00:12:41.764 [2024-11-20 11:22:24.742345] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:41.764 [2024-11-20 11:22:24.742484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.765 11:22:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.765 [ 00:12:41.765 { 00:12:41.765 "name": "NewBaseBdev", 00:12:41.765 "aliases": [ 00:12:41.765 "61ba5e84-3238-4f5b-bf73-70418f2ac9f9" 00:12:41.765 ], 00:12:41.765 "product_name": "Malloc disk", 00:12:41.765 "block_size": 512, 00:12:41.765 "num_blocks": 65536, 00:12:41.765 "uuid": "61ba5e84-3238-4f5b-bf73-70418f2ac9f9", 00:12:41.765 "assigned_rate_limits": { 00:12:41.765 "rw_ios_per_sec": 0, 00:12:41.765 "rw_mbytes_per_sec": 0, 00:12:41.765 "r_mbytes_per_sec": 0, 00:12:41.765 "w_mbytes_per_sec": 0 00:12:41.765 }, 00:12:41.765 "claimed": true, 00:12:41.765 "claim_type": "exclusive_write", 00:12:41.765 "zoned": false, 00:12:41.765 "supported_io_types": { 00:12:41.765 "read": true, 00:12:41.765 "write": true, 00:12:41.765 "unmap": true, 00:12:41.765 "flush": true, 00:12:41.765 "reset": true, 00:12:41.765 "nvme_admin": false, 00:12:41.765 "nvme_io": false, 00:12:41.765 "nvme_io_md": false, 00:12:41.765 "write_zeroes": true, 00:12:41.765 "zcopy": true, 00:12:41.765 "get_zone_info": false, 00:12:41.765 "zone_management": false, 00:12:41.765 "zone_append": false, 00:12:41.765 "compare": false, 00:12:41.765 "compare_and_write": false, 00:12:41.765 "abort": true, 00:12:41.765 "seek_hole": false, 00:12:41.765 "seek_data": false, 00:12:41.765 "copy": true, 00:12:41.765 "nvme_iov_md": false 00:12:41.765 }, 00:12:41.765 "memory_domains": [ 00:12:41.765 { 00:12:41.765 "dma_device_id": "system", 00:12:41.765 "dma_device_type": 1 00:12:41.765 }, 00:12:41.765 { 00:12:41.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.765 "dma_device_type": 2 00:12:41.765 } 00:12:41.765 ], 00:12:41.765 "driver_specific": {} 00:12:41.765 } 00:12:41.765 ] 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:41.765 11:22:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.765 "name": "Existed_Raid", 00:12:41.765 "uuid": "483c8498-0f9a-442c-bb88-a2803cfc2a2f", 00:12:41.765 "strip_size_kb": 64, 00:12:41.765 
"state": "online", 00:12:41.765 "raid_level": "concat", 00:12:41.765 "superblock": true, 00:12:41.765 "num_base_bdevs": 4, 00:12:41.765 "num_base_bdevs_discovered": 4, 00:12:41.765 "num_base_bdevs_operational": 4, 00:12:41.765 "base_bdevs_list": [ 00:12:41.765 { 00:12:41.765 "name": "NewBaseBdev", 00:12:41.765 "uuid": "61ba5e84-3238-4f5b-bf73-70418f2ac9f9", 00:12:41.765 "is_configured": true, 00:12:41.765 "data_offset": 2048, 00:12:41.765 "data_size": 63488 00:12:41.765 }, 00:12:41.765 { 00:12:41.765 "name": "BaseBdev2", 00:12:41.765 "uuid": "05b0c780-efdc-4692-b0f5-50f1b179ef55", 00:12:41.765 "is_configured": true, 00:12:41.765 "data_offset": 2048, 00:12:41.765 "data_size": 63488 00:12:41.765 }, 00:12:41.765 { 00:12:41.765 "name": "BaseBdev3", 00:12:41.765 "uuid": "965ccb9d-c145-4ff3-95b5-400382d24cb2", 00:12:41.765 "is_configured": true, 00:12:41.765 "data_offset": 2048, 00:12:41.765 "data_size": 63488 00:12:41.765 }, 00:12:41.765 { 00:12:41.765 "name": "BaseBdev4", 00:12:41.765 "uuid": "dd0a24b0-5a42-4d48-84af-44ecb223bc11", 00:12:41.765 "is_configured": true, 00:12:41.765 "data_offset": 2048, 00:12:41.765 "data_size": 63488 00:12:41.765 } 00:12:41.765 ] 00:12:41.765 }' 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.765 11:22:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.332 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:42.332 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:42.332 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:42.332 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:42.332 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:42.332 
11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:42.332 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:42.332 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:42.332 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.332 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.332 [2024-11-20 11:22:25.221237] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.332 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.332 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:42.332 "name": "Existed_Raid", 00:12:42.332 "aliases": [ 00:12:42.332 "483c8498-0f9a-442c-bb88-a2803cfc2a2f" 00:12:42.332 ], 00:12:42.332 "product_name": "Raid Volume", 00:12:42.332 "block_size": 512, 00:12:42.332 "num_blocks": 253952, 00:12:42.332 "uuid": "483c8498-0f9a-442c-bb88-a2803cfc2a2f", 00:12:42.332 "assigned_rate_limits": { 00:12:42.332 "rw_ios_per_sec": 0, 00:12:42.332 "rw_mbytes_per_sec": 0, 00:12:42.332 "r_mbytes_per_sec": 0, 00:12:42.332 "w_mbytes_per_sec": 0 00:12:42.332 }, 00:12:42.332 "claimed": false, 00:12:42.332 "zoned": false, 00:12:42.332 "supported_io_types": { 00:12:42.332 "read": true, 00:12:42.332 "write": true, 00:12:42.332 "unmap": true, 00:12:42.332 "flush": true, 00:12:42.332 "reset": true, 00:12:42.332 "nvme_admin": false, 00:12:42.332 "nvme_io": false, 00:12:42.332 "nvme_io_md": false, 00:12:42.332 "write_zeroes": true, 00:12:42.332 "zcopy": false, 00:12:42.332 "get_zone_info": false, 00:12:42.332 "zone_management": false, 00:12:42.332 "zone_append": false, 00:12:42.332 "compare": false, 00:12:42.332 "compare_and_write": false, 00:12:42.332 "abort": 
false, 00:12:42.332 "seek_hole": false, 00:12:42.332 "seek_data": false, 00:12:42.332 "copy": false, 00:12:42.332 "nvme_iov_md": false 00:12:42.332 }, 00:12:42.332 "memory_domains": [ 00:12:42.332 { 00:12:42.332 "dma_device_id": "system", 00:12:42.332 "dma_device_type": 1 00:12:42.332 }, 00:12:42.332 { 00:12:42.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.332 "dma_device_type": 2 00:12:42.332 }, 00:12:42.332 { 00:12:42.332 "dma_device_id": "system", 00:12:42.332 "dma_device_type": 1 00:12:42.332 }, 00:12:42.332 { 00:12:42.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.332 "dma_device_type": 2 00:12:42.332 }, 00:12:42.332 { 00:12:42.332 "dma_device_id": "system", 00:12:42.332 "dma_device_type": 1 00:12:42.332 }, 00:12:42.332 { 00:12:42.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.332 "dma_device_type": 2 00:12:42.332 }, 00:12:42.332 { 00:12:42.332 "dma_device_id": "system", 00:12:42.332 "dma_device_type": 1 00:12:42.332 }, 00:12:42.332 { 00:12:42.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.332 "dma_device_type": 2 00:12:42.332 } 00:12:42.332 ], 00:12:42.332 "driver_specific": { 00:12:42.332 "raid": { 00:12:42.332 "uuid": "483c8498-0f9a-442c-bb88-a2803cfc2a2f", 00:12:42.332 "strip_size_kb": 64, 00:12:42.332 "state": "online", 00:12:42.332 "raid_level": "concat", 00:12:42.332 "superblock": true, 00:12:42.332 "num_base_bdevs": 4, 00:12:42.332 "num_base_bdevs_discovered": 4, 00:12:42.332 "num_base_bdevs_operational": 4, 00:12:42.332 "base_bdevs_list": [ 00:12:42.332 { 00:12:42.332 "name": "NewBaseBdev", 00:12:42.332 "uuid": "61ba5e84-3238-4f5b-bf73-70418f2ac9f9", 00:12:42.332 "is_configured": true, 00:12:42.332 "data_offset": 2048, 00:12:42.332 "data_size": 63488 00:12:42.332 }, 00:12:42.332 { 00:12:42.332 "name": "BaseBdev2", 00:12:42.333 "uuid": "05b0c780-efdc-4692-b0f5-50f1b179ef55", 00:12:42.333 "is_configured": true, 00:12:42.333 "data_offset": 2048, 00:12:42.333 "data_size": 63488 00:12:42.333 }, 00:12:42.333 { 00:12:42.333 
"name": "BaseBdev3", 00:12:42.333 "uuid": "965ccb9d-c145-4ff3-95b5-400382d24cb2", 00:12:42.333 "is_configured": true, 00:12:42.333 "data_offset": 2048, 00:12:42.333 "data_size": 63488 00:12:42.333 }, 00:12:42.333 { 00:12:42.333 "name": "BaseBdev4", 00:12:42.333 "uuid": "dd0a24b0-5a42-4d48-84af-44ecb223bc11", 00:12:42.333 "is_configured": true, 00:12:42.333 "data_offset": 2048, 00:12:42.333 "data_size": 63488 00:12:42.333 } 00:12:42.333 ] 00:12:42.333 } 00:12:42.333 } 00:12:42.333 }' 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:42.333 BaseBdev2 00:12:42.333 BaseBdev3 00:12:42.333 BaseBdev4' 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.333 11:22:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.333 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.590 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.590 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.590 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:42.590 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.590 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.590 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:42.590 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.590 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.590 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.590 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.590 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.590 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:42.590 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.590 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.590 [2024-11-20 11:22:25.540366] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:42.590 [2024-11-20 11:22:25.540462] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:42.590 [2024-11-20 11:22:25.540584] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:42.590 [2024-11-20 11:22:25.540692] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:42.590 [2024-11-20 11:22:25.540743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:42.590 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.590 11:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72104 00:12:42.590 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72104 ']' 00:12:42.591 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72104 00:12:42.591 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:42.591 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.591 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72104 00:12:42.591 killing process with pid 72104 00:12:42.591 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.591 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.591 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72104' 00:12:42.591 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72104 00:12:42.591 [2024-11-20 11:22:25.577305] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:42.591 11:22:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72104 00:12:43.157 [2024-11-20 11:22:25.983000] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:44.089 11:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:44.089 00:12:44.089 real 0m11.939s 00:12:44.089 user 0m19.075s 00:12:44.089 sys 0m2.077s 00:12:44.089 11:22:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.089 11:22:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.089 ************************************ 00:12:44.089 END TEST raid_state_function_test_sb 00:12:44.089 ************************************ 00:12:44.089 11:22:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:12:44.089 11:22:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:44.089 11:22:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.089 11:22:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:44.089 ************************************ 00:12:44.089 START TEST raid_superblock_test 00:12:44.089 ************************************ 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72769 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72769 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72769 ']' 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.089 11:22:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.365 [2024-11-20 11:22:27.255440] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:12:44.365 [2024-11-20 11:22:27.255647] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72769 ] 00:12:44.365 [2024-11-20 11:22:27.431445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.623 [2024-11-20 11:22:27.549834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.881 [2024-11-20 11:22:27.758967] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.881 [2024-11-20 11:22:27.759143] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:45.140 
11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.140 malloc1 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.140 [2024-11-20 11:22:28.215010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:45.140 [2024-11-20 11:22:28.215138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.140 [2024-11-20 11:22:28.215181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:45.140 [2024-11-20 11:22:28.215219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.140 [2024-11-20 11:22:28.217355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.140 [2024-11-20 11:22:28.217394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:45.140 pt1 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.140 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.400 malloc2 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.400 [2024-11-20 11:22:28.271124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:45.400 [2024-11-20 11:22:28.271242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.400 [2024-11-20 11:22:28.271290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:45.400 [2024-11-20 11:22:28.271325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.400 [2024-11-20 11:22:28.273663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.400 [2024-11-20 11:22:28.273740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:45.400 
pt2 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.400 malloc3 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.400 [2024-11-20 11:22:28.347382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:45.400 [2024-11-20 11:22:28.347493] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.400 [2024-11-20 11:22:28.347550] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:45.400 [2024-11-20 11:22:28.347597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.400 [2024-11-20 11:22:28.350105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.400 [2024-11-20 11:22:28.350185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:45.400 pt3 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.400 malloc4 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.400 [2024-11-20 11:22:28.404980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:45.400 [2024-11-20 11:22:28.405098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.400 [2024-11-20 11:22:28.405139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:45.400 [2024-11-20 11:22:28.405197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.400 [2024-11-20 11:22:28.407465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.400 [2024-11-20 11:22:28.407560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:45.400 pt4 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.400 [2024-11-20 11:22:28.416980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:45.400 [2024-11-20 
11:22:28.419008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:45.400 [2024-11-20 11:22:28.419132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:45.400 [2024-11-20 11:22:28.419238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:45.400 [2024-11-20 11:22:28.419533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:45.400 [2024-11-20 11:22:28.419588] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:45.400 [2024-11-20 11:22:28.419965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:45.400 [2024-11-20 11:22:28.420248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:45.400 [2024-11-20 11:22:28.420306] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:45.400 [2024-11-20 11:22:28.420566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:45.400 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.401 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.401 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:45.401 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.401 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.401 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:45.401 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.401 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.401 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.401 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.401 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.401 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.401 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.401 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.401 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.401 "name": "raid_bdev1", 00:12:45.401 "uuid": "d388dbab-9622-4dfc-90af-8092a5508d93", 00:12:45.401 "strip_size_kb": 64, 00:12:45.401 "state": "online", 00:12:45.401 "raid_level": "concat", 00:12:45.401 "superblock": true, 00:12:45.401 "num_base_bdevs": 4, 00:12:45.401 "num_base_bdevs_discovered": 4, 00:12:45.401 "num_base_bdevs_operational": 4, 00:12:45.401 "base_bdevs_list": [ 00:12:45.401 { 00:12:45.401 "name": "pt1", 00:12:45.401 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:45.401 "is_configured": true, 00:12:45.401 "data_offset": 2048, 00:12:45.401 "data_size": 63488 00:12:45.401 }, 00:12:45.401 { 00:12:45.401 "name": "pt2", 00:12:45.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:45.401 "is_configured": true, 00:12:45.401 "data_offset": 2048, 00:12:45.401 "data_size": 63488 00:12:45.401 }, 00:12:45.401 { 00:12:45.401 "name": "pt3", 00:12:45.401 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:45.401 "is_configured": true, 00:12:45.401 "data_offset": 2048, 00:12:45.401 
"data_size": 63488 00:12:45.401 }, 00:12:45.401 { 00:12:45.401 "name": "pt4", 00:12:45.401 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:45.401 "is_configured": true, 00:12:45.401 "data_offset": 2048, 00:12:45.401 "data_size": 63488 00:12:45.401 } 00:12:45.401 ] 00:12:45.401 }' 00:12:45.401 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.401 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.971 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:45.971 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:45.971 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:45.971 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:45.971 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:45.971 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:45.971 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:45.971 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:45.971 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.971 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.971 [2024-11-20 11:22:28.892547] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:45.971 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.971 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:45.971 "name": "raid_bdev1", 00:12:45.971 "aliases": [ 00:12:45.971 "d388dbab-9622-4dfc-90af-8092a5508d93" 
00:12:45.971 ], 00:12:45.971 "product_name": "Raid Volume", 00:12:45.971 "block_size": 512, 00:12:45.971 "num_blocks": 253952, 00:12:45.971 "uuid": "d388dbab-9622-4dfc-90af-8092a5508d93", 00:12:45.971 "assigned_rate_limits": { 00:12:45.971 "rw_ios_per_sec": 0, 00:12:45.971 "rw_mbytes_per_sec": 0, 00:12:45.971 "r_mbytes_per_sec": 0, 00:12:45.971 "w_mbytes_per_sec": 0 00:12:45.971 }, 00:12:45.971 "claimed": false, 00:12:45.971 "zoned": false, 00:12:45.971 "supported_io_types": { 00:12:45.971 "read": true, 00:12:45.971 "write": true, 00:12:45.971 "unmap": true, 00:12:45.971 "flush": true, 00:12:45.971 "reset": true, 00:12:45.971 "nvme_admin": false, 00:12:45.971 "nvme_io": false, 00:12:45.971 "nvme_io_md": false, 00:12:45.971 "write_zeroes": true, 00:12:45.971 "zcopy": false, 00:12:45.971 "get_zone_info": false, 00:12:45.971 "zone_management": false, 00:12:45.971 "zone_append": false, 00:12:45.971 "compare": false, 00:12:45.971 "compare_and_write": false, 00:12:45.971 "abort": false, 00:12:45.971 "seek_hole": false, 00:12:45.971 "seek_data": false, 00:12:45.971 "copy": false, 00:12:45.971 "nvme_iov_md": false 00:12:45.971 }, 00:12:45.971 "memory_domains": [ 00:12:45.971 { 00:12:45.971 "dma_device_id": "system", 00:12:45.971 "dma_device_type": 1 00:12:45.971 }, 00:12:45.971 { 00:12:45.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.971 "dma_device_type": 2 00:12:45.971 }, 00:12:45.971 { 00:12:45.971 "dma_device_id": "system", 00:12:45.971 "dma_device_type": 1 00:12:45.971 }, 00:12:45.971 { 00:12:45.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.971 "dma_device_type": 2 00:12:45.971 }, 00:12:45.971 { 00:12:45.971 "dma_device_id": "system", 00:12:45.971 "dma_device_type": 1 00:12:45.971 }, 00:12:45.971 { 00:12:45.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.971 "dma_device_type": 2 00:12:45.971 }, 00:12:45.971 { 00:12:45.971 "dma_device_id": "system", 00:12:45.971 "dma_device_type": 1 00:12:45.971 }, 00:12:45.971 { 00:12:45.971 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:45.971 "dma_device_type": 2 00:12:45.971 } 00:12:45.971 ], 00:12:45.971 "driver_specific": { 00:12:45.971 "raid": { 00:12:45.971 "uuid": "d388dbab-9622-4dfc-90af-8092a5508d93", 00:12:45.972 "strip_size_kb": 64, 00:12:45.972 "state": "online", 00:12:45.972 "raid_level": "concat", 00:12:45.972 "superblock": true, 00:12:45.972 "num_base_bdevs": 4, 00:12:45.972 "num_base_bdevs_discovered": 4, 00:12:45.972 "num_base_bdevs_operational": 4, 00:12:45.972 "base_bdevs_list": [ 00:12:45.972 { 00:12:45.972 "name": "pt1", 00:12:45.972 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:45.972 "is_configured": true, 00:12:45.972 "data_offset": 2048, 00:12:45.972 "data_size": 63488 00:12:45.972 }, 00:12:45.972 { 00:12:45.972 "name": "pt2", 00:12:45.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:45.972 "is_configured": true, 00:12:45.972 "data_offset": 2048, 00:12:45.972 "data_size": 63488 00:12:45.972 }, 00:12:45.972 { 00:12:45.972 "name": "pt3", 00:12:45.972 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:45.972 "is_configured": true, 00:12:45.972 "data_offset": 2048, 00:12:45.972 "data_size": 63488 00:12:45.972 }, 00:12:45.972 { 00:12:45.972 "name": "pt4", 00:12:45.972 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:45.972 "is_configured": true, 00:12:45.972 "data_offset": 2048, 00:12:45.972 "data_size": 63488 00:12:45.972 } 00:12:45.972 ] 00:12:45.972 } 00:12:45.972 } 00:12:45.972 }' 00:12:45.972 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:45.972 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:45.972 pt2 00:12:45.972 pt3 00:12:45.972 pt4' 00:12:45.972 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.972 11:22:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:45.972 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.972 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:45.972 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.972 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.972 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.972 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.972 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.972 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.972 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.972 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.972 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:45.972 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.972 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.231 11:22:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:46.231 [2024-11-20 11:22:29.228022] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d388dbab-9622-4dfc-90af-8092a5508d93 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d388dbab-9622-4dfc-90af-8092a5508d93 ']' 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.231 [2024-11-20 11:22:29.275608] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:46.231 [2024-11-20 11:22:29.275638] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.231 [2024-11-20 11:22:29.275731] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.231 [2024-11-20 11:22:29.275808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.231 [2024-11-20 11:22:29.275824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.231 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.491 [2024-11-20 11:22:29.439375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:46.491 [2024-11-20 11:22:29.441597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:46.491 [2024-11-20 11:22:29.441707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:46.491 [2024-11-20 11:22:29.441771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:46.491 [2024-11-20 11:22:29.441856] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:46.491 [2024-11-20 11:22:29.441960] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:46.491 [2024-11-20 11:22:29.441986] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:46.491 [2024-11-20 11:22:29.442009] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:46.491 [2024-11-20 11:22:29.442024] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:46.491 [2024-11-20 11:22:29.442038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:12:46.491 request: 00:12:46.491 { 00:12:46.491 "name": "raid_bdev1", 00:12:46.491 "raid_level": "concat", 00:12:46.491 "base_bdevs": [ 00:12:46.491 "malloc1", 00:12:46.491 "malloc2", 00:12:46.491 "malloc3", 00:12:46.491 "malloc4" 00:12:46.491 ], 00:12:46.491 "strip_size_kb": 64, 00:12:46.491 "superblock": false, 00:12:46.491 "method": "bdev_raid_create", 00:12:46.491 "req_id": 1 00:12:46.491 } 00:12:46.491 Got JSON-RPC error response 00:12:46.491 response: 00:12:46.491 { 00:12:46.491 "code": -17, 00:12:46.491 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:46.491 } 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.491 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.491 [2024-11-20 11:22:29.507203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:46.491 [2024-11-20 11:22:29.507275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.491 [2024-11-20 11:22:29.507294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:46.491 [2024-11-20 11:22:29.507305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.491 [2024-11-20 11:22:29.509627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.492 [2024-11-20 11:22:29.509672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:46.492 [2024-11-20 11:22:29.509760] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:46.492 [2024-11-20 11:22:29.509825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:46.492 pt1 00:12:46.492 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.492 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:46.492 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.492 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.492 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:46.492 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.492 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:46.492 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.492 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.492 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.492 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.492 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.492 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.492 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.492 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.492 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.492 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.492 "name": "raid_bdev1", 00:12:46.492 "uuid": "d388dbab-9622-4dfc-90af-8092a5508d93", 00:12:46.492 "strip_size_kb": 64, 00:12:46.492 "state": "configuring", 00:12:46.492 "raid_level": "concat", 00:12:46.492 "superblock": true, 00:12:46.492 "num_base_bdevs": 4, 00:12:46.492 "num_base_bdevs_discovered": 1, 00:12:46.492 "num_base_bdevs_operational": 4, 00:12:46.492 "base_bdevs_list": [ 00:12:46.492 { 00:12:46.492 "name": "pt1", 00:12:46.492 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:46.492 "is_configured": true, 00:12:46.492 "data_offset": 2048, 00:12:46.492 "data_size": 63488 00:12:46.492 }, 00:12:46.492 { 00:12:46.492 "name": null, 00:12:46.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:46.492 "is_configured": false, 00:12:46.492 "data_offset": 2048, 00:12:46.492 "data_size": 63488 00:12:46.492 }, 00:12:46.492 { 00:12:46.492 "name": null, 00:12:46.492 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:46.492 "is_configured": false, 00:12:46.492 "data_offset": 2048, 00:12:46.492 "data_size": 63488 00:12:46.492 }, 00:12:46.492 { 00:12:46.492 "name": null, 00:12:46.492 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:46.492 "is_configured": false, 00:12:46.492 "data_offset": 2048, 00:12:46.492 "data_size": 63488 00:12:46.492 } 00:12:46.492 ] 00:12:46.492 }' 00:12:46.492 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.492 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.060 [2024-11-20 11:22:29.934511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:47.060 [2024-11-20 11:22:29.934658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.060 [2024-11-20 11:22:29.934708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:47.060 [2024-11-20 11:22:29.934749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.060 [2024-11-20 11:22:29.935229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.060 [2024-11-20 11:22:29.935290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:47.060 [2024-11-20 11:22:29.935405] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:47.060 [2024-11-20 11:22:29.935470] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:47.060 pt2 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.060 [2024-11-20 11:22:29.946491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.060 11:22:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.060 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.060 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.060 "name": "raid_bdev1", 00:12:47.060 "uuid": "d388dbab-9622-4dfc-90af-8092a5508d93", 00:12:47.060 "strip_size_kb": 64, 00:12:47.060 "state": "configuring", 00:12:47.060 "raid_level": "concat", 00:12:47.060 "superblock": true, 00:12:47.060 "num_base_bdevs": 4, 00:12:47.060 "num_base_bdevs_discovered": 1, 00:12:47.060 "num_base_bdevs_operational": 4, 00:12:47.060 "base_bdevs_list": [ 00:12:47.060 { 00:12:47.060 "name": "pt1", 00:12:47.060 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:47.060 "is_configured": true, 00:12:47.061 "data_offset": 2048, 00:12:47.061 "data_size": 63488 00:12:47.061 }, 00:12:47.061 { 00:12:47.061 "name": null, 00:12:47.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:47.061 "is_configured": false, 00:12:47.061 "data_offset": 0, 00:12:47.061 "data_size": 63488 00:12:47.061 }, 00:12:47.061 { 00:12:47.061 "name": null, 00:12:47.061 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:47.061 "is_configured": false, 00:12:47.061 "data_offset": 2048, 00:12:47.061 "data_size": 63488 00:12:47.061 }, 00:12:47.061 { 00:12:47.061 "name": null, 00:12:47.061 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:47.061 "is_configured": false, 00:12:47.061 "data_offset": 2048, 00:12:47.061 "data_size": 63488 00:12:47.061 } 00:12:47.061 ] 00:12:47.061 }' 00:12:47.061 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.061 11:22:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:47.321 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:47.321 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:47.321 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:47.321 11:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.321 11:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.321 [2024-11-20 11:22:30.389692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:47.321 [2024-11-20 11:22:30.389760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.321 [2024-11-20 11:22:30.389779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:47.321 [2024-11-20 11:22:30.389788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.321 [2024-11-20 11:22:30.390223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.321 [2024-11-20 11:22:30.390264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:47.321 [2024-11-20 11:22:30.390350] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:47.321 [2024-11-20 11:22:30.390378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:47.321 pt2 00:12:47.321 11:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.321 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:47.321 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:47.321 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:47.321 11:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.321 11:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.321 [2024-11-20 11:22:30.401657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:47.321 [2024-11-20 11:22:30.401780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.321 [2024-11-20 11:22:30.401807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:47.321 [2024-11-20 11:22:30.401818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.321 [2024-11-20 11:22:30.402225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.321 [2024-11-20 11:22:30.402240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:47.321 [2024-11-20 11:22:30.402308] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:47.321 [2024-11-20 11:22:30.402327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:47.321 pt3 00:12:47.321 11:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.322 [2024-11-20 11:22:30.413607] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:47.322 [2024-11-20 11:22:30.413662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.322 [2024-11-20 11:22:30.413683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:47.322 [2024-11-20 11:22:30.413690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.322 [2024-11-20 11:22:30.414083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.322 [2024-11-20 11:22:30.414097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:47.322 [2024-11-20 11:22:30.414169] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:47.322 [2024-11-20 11:22:30.414189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:47.322 [2024-11-20 11:22:30.414324] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:47.322 [2024-11-20 11:22:30.414332] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:47.322 [2024-11-20 11:22:30.414582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:47.322 [2024-11-20 11:22:30.414763] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:47.322 [2024-11-20 11:22:30.414781] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:47.322 [2024-11-20 11:22:30.414918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.322 pt4 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.322 11:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.582 11:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.582 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.582 "name": "raid_bdev1", 00:12:47.582 "uuid": "d388dbab-9622-4dfc-90af-8092a5508d93", 00:12:47.582 "strip_size_kb": 64, 00:12:47.582 "state": "online", 00:12:47.582 "raid_level": "concat", 00:12:47.582 
"superblock": true, 00:12:47.582 "num_base_bdevs": 4, 00:12:47.582 "num_base_bdevs_discovered": 4, 00:12:47.582 "num_base_bdevs_operational": 4, 00:12:47.582 "base_bdevs_list": [ 00:12:47.582 { 00:12:47.582 "name": "pt1", 00:12:47.582 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:47.582 "is_configured": true, 00:12:47.582 "data_offset": 2048, 00:12:47.582 "data_size": 63488 00:12:47.582 }, 00:12:47.582 { 00:12:47.582 "name": "pt2", 00:12:47.582 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:47.582 "is_configured": true, 00:12:47.582 "data_offset": 2048, 00:12:47.582 "data_size": 63488 00:12:47.582 }, 00:12:47.582 { 00:12:47.582 "name": "pt3", 00:12:47.582 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:47.582 "is_configured": true, 00:12:47.582 "data_offset": 2048, 00:12:47.582 "data_size": 63488 00:12:47.582 }, 00:12:47.582 { 00:12:47.582 "name": "pt4", 00:12:47.582 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:47.582 "is_configured": true, 00:12:47.582 "data_offset": 2048, 00:12:47.582 "data_size": 63488 00:12:47.582 } 00:12:47.582 ] 00:12:47.582 }' 00:12:47.582 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.582 11:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.843 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:47.843 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:47.843 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:47.843 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:47.843 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:47.843 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:47.843 11:22:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:47.843 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:47.843 11:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.843 11:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.843 [2024-11-20 11:22:30.833288] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:47.843 11:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.843 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:47.843 "name": "raid_bdev1", 00:12:47.843 "aliases": [ 00:12:47.843 "d388dbab-9622-4dfc-90af-8092a5508d93" 00:12:47.843 ], 00:12:47.843 "product_name": "Raid Volume", 00:12:47.843 "block_size": 512, 00:12:47.843 "num_blocks": 253952, 00:12:47.843 "uuid": "d388dbab-9622-4dfc-90af-8092a5508d93", 00:12:47.843 "assigned_rate_limits": { 00:12:47.843 "rw_ios_per_sec": 0, 00:12:47.843 "rw_mbytes_per_sec": 0, 00:12:47.843 "r_mbytes_per_sec": 0, 00:12:47.843 "w_mbytes_per_sec": 0 00:12:47.843 }, 00:12:47.843 "claimed": false, 00:12:47.843 "zoned": false, 00:12:47.843 "supported_io_types": { 00:12:47.843 "read": true, 00:12:47.843 "write": true, 00:12:47.843 "unmap": true, 00:12:47.843 "flush": true, 00:12:47.843 "reset": true, 00:12:47.843 "nvme_admin": false, 00:12:47.843 "nvme_io": false, 00:12:47.843 "nvme_io_md": false, 00:12:47.843 "write_zeroes": true, 00:12:47.843 "zcopy": false, 00:12:47.843 "get_zone_info": false, 00:12:47.843 "zone_management": false, 00:12:47.843 "zone_append": false, 00:12:47.843 "compare": false, 00:12:47.843 "compare_and_write": false, 00:12:47.843 "abort": false, 00:12:47.843 "seek_hole": false, 00:12:47.843 "seek_data": false, 00:12:47.843 "copy": false, 00:12:47.843 "nvme_iov_md": false 00:12:47.843 }, 00:12:47.843 
"memory_domains": [ 00:12:47.843 { 00:12:47.843 "dma_device_id": "system", 00:12:47.843 "dma_device_type": 1 00:12:47.843 }, 00:12:47.843 { 00:12:47.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.843 "dma_device_type": 2 00:12:47.843 }, 00:12:47.843 { 00:12:47.843 "dma_device_id": "system", 00:12:47.843 "dma_device_type": 1 00:12:47.843 }, 00:12:47.843 { 00:12:47.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.843 "dma_device_type": 2 00:12:47.843 }, 00:12:47.843 { 00:12:47.843 "dma_device_id": "system", 00:12:47.843 "dma_device_type": 1 00:12:47.843 }, 00:12:47.843 { 00:12:47.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.843 "dma_device_type": 2 00:12:47.843 }, 00:12:47.843 { 00:12:47.843 "dma_device_id": "system", 00:12:47.843 "dma_device_type": 1 00:12:47.843 }, 00:12:47.843 { 00:12:47.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.843 "dma_device_type": 2 00:12:47.843 } 00:12:47.843 ], 00:12:47.843 "driver_specific": { 00:12:47.843 "raid": { 00:12:47.843 "uuid": "d388dbab-9622-4dfc-90af-8092a5508d93", 00:12:47.843 "strip_size_kb": 64, 00:12:47.843 "state": "online", 00:12:47.843 "raid_level": "concat", 00:12:47.843 "superblock": true, 00:12:47.843 "num_base_bdevs": 4, 00:12:47.843 "num_base_bdevs_discovered": 4, 00:12:47.843 "num_base_bdevs_operational": 4, 00:12:47.843 "base_bdevs_list": [ 00:12:47.843 { 00:12:47.843 "name": "pt1", 00:12:47.843 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:47.843 "is_configured": true, 00:12:47.843 "data_offset": 2048, 00:12:47.843 "data_size": 63488 00:12:47.843 }, 00:12:47.843 { 00:12:47.843 "name": "pt2", 00:12:47.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:47.843 "is_configured": true, 00:12:47.843 "data_offset": 2048, 00:12:47.843 "data_size": 63488 00:12:47.843 }, 00:12:47.843 { 00:12:47.843 "name": "pt3", 00:12:47.843 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:47.843 "is_configured": true, 00:12:47.843 "data_offset": 2048, 00:12:47.843 "data_size": 63488 
00:12:47.843 }, 00:12:47.843 { 00:12:47.843 "name": "pt4", 00:12:47.843 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:47.843 "is_configured": true, 00:12:47.843 "data_offset": 2048, 00:12:47.843 "data_size": 63488 00:12:47.843 } 00:12:47.843 ] 00:12:47.843 } 00:12:47.843 } 00:12:47.843 }' 00:12:47.843 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:47.843 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:47.843 pt2 00:12:47.843 pt3 00:12:47.843 pt4' 00:12:47.843 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.103 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:48.103 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.103 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:48.103 11:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.103 11:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.103 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.103 11:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.103 11:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.104 [2024-11-20 11:22:31.152776] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:48.104 11:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.104 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d388dbab-9622-4dfc-90af-8092a5508d93 '!=' d388dbab-9622-4dfc-90af-8092a5508d93 ']' 00:12:48.104 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:48.104 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:48.104 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:48.104 11:22:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72769 00:12:48.104 11:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72769 ']' 00:12:48.104 11:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72769 00:12:48.104 11:22:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:12:48.104 11:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.104 11:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72769 00:12:48.364 killing process with pid 72769 00:12:48.364 11:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:48.364 11:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:48.364 11:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72769' 00:12:48.364 11:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72769 00:12:48.364 [2024-11-20 11:22:31.221374] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:48.364 [2024-11-20 11:22:31.221475] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:48.364 [2024-11-20 11:22:31.221549] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:48.364 [2024-11-20 11:22:31.221559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:48.364 11:22:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72769 00:12:48.624 [2024-11-20 11:22:31.636626] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:50.023 11:22:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:50.023 00:12:50.023 real 0m5.607s 00:12:50.023 user 0m8.003s 00:12:50.023 sys 0m0.969s 00:12:50.023 ************************************ 00:12:50.023 END TEST raid_superblock_test 00:12:50.023 ************************************ 00:12:50.023 11:22:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.023 11:22:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.023 11:22:32 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:12:50.023 11:22:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:50.023 11:22:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.023 11:22:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:50.023 ************************************ 00:12:50.023 START TEST raid_read_error_test 00:12:50.023 ************************************ 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ILKhYLV0uo 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73039 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73039 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73039 ']' 00:12:50.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.023 11:22:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.023 [2024-11-20 11:22:32.948612] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:12:50.023 [2024-11-20 11:22:32.948833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73039 ] 00:12:50.024 [2024-11-20 11:22:33.125367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.282 [2024-11-20 11:22:33.243300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.540 [2024-11-20 11:22:33.450787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.540 [2024-11-20 11:22:33.450937] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.799 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:50.799 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:50.799 11:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:50.799 11:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:50.799 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.799 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.799 BaseBdev1_malloc 00:12:50.799 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.799 11:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:50.799 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.799 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.799 true 00:12:50.799 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:50.799 11:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:50.799 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.799 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.799 [2024-11-20 11:22:33.868442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:50.799 [2024-11-20 11:22:33.868521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.799 [2024-11-20 11:22:33.868548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:50.799 [2024-11-20 11:22:33.868563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.799 [2024-11-20 11:22:33.871019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.799 [2024-11-20 11:22:33.871133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:50.799 BaseBdev1 00:12:50.799 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.799 11:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:50.799 11:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:50.799 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.799 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.059 BaseBdev2_malloc 00:12:51.059 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.059 11:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:51.059 11:22:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.059 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.059 true 00:12:51.059 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.059 11:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:51.059 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.059 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.059 [2024-11-20 11:22:33.935669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:51.059 [2024-11-20 11:22:33.935770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.059 [2024-11-20 11:22:33.935806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:51.059 [2024-11-20 11:22:33.935836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.059 [2024-11-20 11:22:33.937968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.059 [2024-11-20 11:22:33.938059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:51.059 BaseBdev2 00:12:51.059 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.059 11:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:51.059 11:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:51.059 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.059 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.059 BaseBdev3_malloc 00:12:51.059 11:22:33 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.059 11:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:51.059 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.059 11:22:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.059 true 00:12:51.059 11:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.059 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:51.059 11:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.059 11:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.059 [2024-11-20 11:22:34.017276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:51.059 [2024-11-20 11:22:34.017335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.059 [2024-11-20 11:22:34.017356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:51.059 [2024-11-20 11:22:34.017368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.059 [2024-11-20 11:22:34.019804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.059 [2024-11-20 11:22:34.019898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:51.059 BaseBdev3 00:12:51.059 11:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.059 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:51.059 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:51.059 11:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.059 11:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.059 BaseBdev4_malloc 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.060 true 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.060 [2024-11-20 11:22:34.086225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:51.060 [2024-11-20 11:22:34.086293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.060 [2024-11-20 11:22:34.086314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:51.060 [2024-11-20 11:22:34.086326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.060 [2024-11-20 11:22:34.088789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.060 [2024-11-20 11:22:34.088897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:51.060 BaseBdev4 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.060 [2024-11-20 11:22:34.098297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:51.060 [2024-11-20 11:22:34.100327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:51.060 [2024-11-20 11:22:34.100475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:51.060 [2024-11-20 11:22:34.100552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:51.060 [2024-11-20 11:22:34.100826] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:51.060 [2024-11-20 11:22:34.100842] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:51.060 [2024-11-20 11:22:34.101115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:51.060 [2024-11-20 11:22:34.101288] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:51.060 [2024-11-20 11:22:34.101299] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:51.060 [2024-11-20 11:22:34.101503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:51.060 11:22:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.060 "name": "raid_bdev1", 00:12:51.060 "uuid": "6c090508-8c2f-4718-b7a1-e31e42cd8c7f", 00:12:51.060 "strip_size_kb": 64, 00:12:51.060 "state": "online", 00:12:51.060 "raid_level": "concat", 00:12:51.060 "superblock": true, 00:12:51.060 "num_base_bdevs": 4, 00:12:51.060 "num_base_bdevs_discovered": 4, 00:12:51.060 "num_base_bdevs_operational": 4, 00:12:51.060 "base_bdevs_list": [ 
00:12:51.060 { 00:12:51.060 "name": "BaseBdev1", 00:12:51.060 "uuid": "9beb9eda-e518-5cd6-a990-699749122728", 00:12:51.060 "is_configured": true, 00:12:51.060 "data_offset": 2048, 00:12:51.060 "data_size": 63488 00:12:51.060 }, 00:12:51.060 { 00:12:51.060 "name": "BaseBdev2", 00:12:51.060 "uuid": "e988bcb3-4808-57d0-8e12-74575650e208", 00:12:51.060 "is_configured": true, 00:12:51.060 "data_offset": 2048, 00:12:51.060 "data_size": 63488 00:12:51.060 }, 00:12:51.060 { 00:12:51.060 "name": "BaseBdev3", 00:12:51.060 "uuid": "2c73eba4-d846-5728-bf32-2d94c7c30225", 00:12:51.060 "is_configured": true, 00:12:51.060 "data_offset": 2048, 00:12:51.060 "data_size": 63488 00:12:51.060 }, 00:12:51.060 { 00:12:51.060 "name": "BaseBdev4", 00:12:51.060 "uuid": "22400b7c-c3a2-5138-b96e-f260952372e6", 00:12:51.060 "is_configured": true, 00:12:51.060 "data_offset": 2048, 00:12:51.060 "data_size": 63488 00:12:51.060 } 00:12:51.060 ] 00:12:51.060 }' 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.060 11:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.625 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:51.625 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:51.625 [2024-11-20 11:22:34.666502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:52.560 11:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:52.560 11:22:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.560 11:22:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.560 11:22:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.560 11:22:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:52.560 11:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:52.560 11:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:52.560 11:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:52.560 11:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.560 11:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.560 11:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:52.560 11:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.560 11:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.560 11:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.560 11:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.560 11:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.560 11:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.560 11:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.561 11:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.561 11:22:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.561 11:22:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.561 11:22:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.561 11:22:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.561 "name": "raid_bdev1", 00:12:52.561 "uuid": "6c090508-8c2f-4718-b7a1-e31e42cd8c7f", 00:12:52.561 "strip_size_kb": 64, 00:12:52.561 "state": "online", 00:12:52.561 "raid_level": "concat", 00:12:52.561 "superblock": true, 00:12:52.561 "num_base_bdevs": 4, 00:12:52.561 "num_base_bdevs_discovered": 4, 00:12:52.561 "num_base_bdevs_operational": 4, 00:12:52.561 "base_bdevs_list": [ 00:12:52.561 { 00:12:52.561 "name": "BaseBdev1", 00:12:52.561 "uuid": "9beb9eda-e518-5cd6-a990-699749122728", 00:12:52.561 "is_configured": true, 00:12:52.561 "data_offset": 2048, 00:12:52.561 "data_size": 63488 00:12:52.561 }, 00:12:52.561 { 00:12:52.561 "name": "BaseBdev2", 00:12:52.561 "uuid": "e988bcb3-4808-57d0-8e12-74575650e208", 00:12:52.561 "is_configured": true, 00:12:52.561 "data_offset": 2048, 00:12:52.561 "data_size": 63488 00:12:52.561 }, 00:12:52.561 { 00:12:52.561 "name": "BaseBdev3", 00:12:52.561 "uuid": "2c73eba4-d846-5728-bf32-2d94c7c30225", 00:12:52.561 "is_configured": true, 00:12:52.561 "data_offset": 2048, 00:12:52.561 "data_size": 63488 00:12:52.561 }, 00:12:52.561 { 00:12:52.561 "name": "BaseBdev4", 00:12:52.561 "uuid": "22400b7c-c3a2-5138-b96e-f260952372e6", 00:12:52.561 "is_configured": true, 00:12:52.561 "data_offset": 2048, 00:12:52.561 "data_size": 63488 00:12:52.561 } 00:12:52.561 ] 00:12:52.561 }' 00:12:52.561 11:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.561 11:22:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.128 11:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:53.128 11:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.128 11:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.128 [2024-11-20 11:22:36.042815] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:53.128 [2024-11-20 11:22:36.042910] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:53.128 [2024-11-20 11:22:36.045862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:53.128 [2024-11-20 11:22:36.045923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.128 [2024-11-20 11:22:36.045966] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:53.128 [2024-11-20 11:22:36.045981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:53.128 { 00:12:53.128 "results": [ 00:12:53.128 { 00:12:53.128 "job": "raid_bdev1", 00:12:53.128 "core_mask": "0x1", 00:12:53.128 "workload": "randrw", 00:12:53.128 "percentage": 50, 00:12:53.128 "status": "finished", 00:12:53.128 "queue_depth": 1, 00:12:53.128 "io_size": 131072, 00:12:53.128 "runtime": 1.37708, 00:12:53.128 "iops": 14719.55151479943, 00:12:53.128 "mibps": 1839.9439393499288, 00:12:53.128 "io_failed": 1, 00:12:53.128 "io_timeout": 0, 00:12:53.128 "avg_latency_us": 94.4724750805623, 00:12:53.128 "min_latency_us": 26.270742358078603, 00:12:53.128 "max_latency_us": 1631.2454148471616 00:12:53.128 } 00:12:53.128 ], 00:12:53.128 "core_count": 1 00:12:53.128 } 00:12:53.128 11:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.128 11:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73039 00:12:53.128 11:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73039 ']' 00:12:53.128 11:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73039 00:12:53.128 11:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:53.128 11:22:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:53.128 11:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73039 00:12:53.128 11:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:53.128 11:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:53.128 11:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73039' 00:12:53.128 killing process with pid 73039 00:12:53.128 11:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73039 00:12:53.128 [2024-11-20 11:22:36.094246] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:53.128 11:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73039 00:12:53.387 [2024-11-20 11:22:36.434965] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:54.765 11:22:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ILKhYLV0uo 00:12:54.765 11:22:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:54.765 11:22:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:54.765 11:22:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:54.765 11:22:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:54.765 ************************************ 00:12:54.765 END TEST raid_read_error_test 00:12:54.765 ************************************ 00:12:54.765 11:22:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:54.765 11:22:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:54.765 11:22:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:54.765 00:12:54.765 real 0m4.796s 
00:12:54.765 user 0m5.691s 00:12:54.765 sys 0m0.581s 00:12:54.765 11:22:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.765 11:22:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.765 11:22:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:12:54.765 11:22:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:54.765 11:22:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.765 11:22:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:54.765 ************************************ 00:12:54.765 START TEST raid_write_error_test 00:12:54.765 ************************************ 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2uDJnogayW 00:12:54.765 11:22:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73184 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73184 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73184 ']' 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.765 11:22:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.765 [2024-11-20 11:22:37.810339] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:12:54.765 [2024-11-20 11:22:37.810471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73184 ] 00:12:55.088 [2024-11-20 11:22:37.985042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.088 [2024-11-20 11:22:38.106019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.345 [2024-11-20 11:22:38.298684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.345 [2024-11-20 11:22:38.298773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.602 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.602 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:55.602 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:55.602 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:55.602 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.602 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.602 BaseBdev1_malloc 00:12:55.602 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.602 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:55.602 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.602 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.860 true 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.860 [2024-11-20 11:22:38.725185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:55.860 [2024-11-20 11:22:38.725298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.860 [2024-11-20 11:22:38.725342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:55.860 [2024-11-20 11:22:38.725378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.860 [2024-11-20 11:22:38.727610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.860 [2024-11-20 11:22:38.727710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:55.860 BaseBdev1 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.860 BaseBdev2_malloc 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:55.860 11:22:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.860 true 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.860 [2024-11-20 11:22:38.796717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:55.860 [2024-11-20 11:22:38.796781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.860 [2024-11-20 11:22:38.796802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:55.860 [2024-11-20 11:22:38.796813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.860 [2024-11-20 11:22:38.798955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.860 [2024-11-20 11:22:38.799107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:55.860 BaseBdev2 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:55.860 BaseBdev3_malloc 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.860 true 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.860 [2024-11-20 11:22:38.875555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:55.860 [2024-11-20 11:22:38.875702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.860 [2024-11-20 11:22:38.875728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:55.860 [2024-11-20 11:22:38.875739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.860 [2024-11-20 11:22:38.878017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.860 [2024-11-20 11:22:38.878084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:55.860 BaseBdev3 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.860 BaseBdev4_malloc 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.860 true 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.860 [2024-11-20 11:22:38.943017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:55.860 [2024-11-20 11:22:38.943117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.860 [2024-11-20 11:22:38.943138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:55.860 [2024-11-20 11:22:38.943149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.860 [2024-11-20 11:22:38.945255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.860 [2024-11-20 11:22:38.945296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:55.860 BaseBdev4 
00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.860 [2024-11-20 11:22:38.955051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:55.860 [2024-11-20 11:22:38.956951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:55.860 [2024-11-20 11:22:38.957028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:55.860 [2024-11-20 11:22:38.957096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:55.860 [2024-11-20 11:22:38.957315] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:55.860 [2024-11-20 11:22:38.957341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:55.860 [2024-11-20 11:22:38.957611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:55.860 [2024-11-20 11:22:38.957775] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:55.860 [2024-11-20 11:22:38.957822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:55.860 [2024-11-20 11:22:38.957984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.860 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.861 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:12:55.861 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.861 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.861 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:55.861 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.861 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:55.861 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.861 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.861 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.861 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.861 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.861 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.861 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.861 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.119 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.119 11:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.119 "name": "raid_bdev1", 00:12:56.119 "uuid": "dcd41bf2-f230-4f82-b3ae-229d302a96ea", 00:12:56.119 "strip_size_kb": 64, 00:12:56.119 "state": "online", 00:12:56.119 "raid_level": "concat", 00:12:56.119 "superblock": true, 00:12:56.119 "num_base_bdevs": 4, 00:12:56.119 "num_base_bdevs_discovered": 4, 00:12:56.119 
"num_base_bdevs_operational": 4, 00:12:56.119 "base_bdevs_list": [ 00:12:56.119 { 00:12:56.119 "name": "BaseBdev1", 00:12:56.119 "uuid": "7fc1a991-5e40-5555-9cee-774c340aa604", 00:12:56.119 "is_configured": true, 00:12:56.119 "data_offset": 2048, 00:12:56.119 "data_size": 63488 00:12:56.119 }, 00:12:56.119 { 00:12:56.119 "name": "BaseBdev2", 00:12:56.119 "uuid": "5a05436f-32eb-5a5a-88a8-d8ca91c9d331", 00:12:56.119 "is_configured": true, 00:12:56.119 "data_offset": 2048, 00:12:56.119 "data_size": 63488 00:12:56.119 }, 00:12:56.119 { 00:12:56.119 "name": "BaseBdev3", 00:12:56.119 "uuid": "4fda7ba3-75d3-51c5-af1d-1078c749f928", 00:12:56.119 "is_configured": true, 00:12:56.119 "data_offset": 2048, 00:12:56.119 "data_size": 63488 00:12:56.119 }, 00:12:56.119 { 00:12:56.119 "name": "BaseBdev4", 00:12:56.119 "uuid": "72a380af-009e-5467-a995-8198b66575e7", 00:12:56.119 "is_configured": true, 00:12:56.119 "data_offset": 2048, 00:12:56.119 "data_size": 63488 00:12:56.119 } 00:12:56.119 ] 00:12:56.119 }' 00:12:56.119 11:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.119 11:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.376 11:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:56.376 11:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:56.376 [2024-11-20 11:22:39.487557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:57.312 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:57.312 11:22:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.312 11:22:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.312 11:22:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.312 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:57.312 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:57.312 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:57.312 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:57.312 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.312 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.312 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:57.312 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.312 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.312 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.312 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.312 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.312 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.312 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.312 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.312 11:22:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.312 11:22:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.312 11:22:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.571 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.571 "name": "raid_bdev1", 00:12:57.571 "uuid": "dcd41bf2-f230-4f82-b3ae-229d302a96ea", 00:12:57.571 "strip_size_kb": 64, 00:12:57.571 "state": "online", 00:12:57.571 "raid_level": "concat", 00:12:57.571 "superblock": true, 00:12:57.571 "num_base_bdevs": 4, 00:12:57.571 "num_base_bdevs_discovered": 4, 00:12:57.571 "num_base_bdevs_operational": 4, 00:12:57.571 "base_bdevs_list": [ 00:12:57.571 { 00:12:57.571 "name": "BaseBdev1", 00:12:57.571 "uuid": "7fc1a991-5e40-5555-9cee-774c340aa604", 00:12:57.571 "is_configured": true, 00:12:57.571 "data_offset": 2048, 00:12:57.571 "data_size": 63488 00:12:57.571 }, 00:12:57.571 { 00:12:57.571 "name": "BaseBdev2", 00:12:57.571 "uuid": "5a05436f-32eb-5a5a-88a8-d8ca91c9d331", 00:12:57.571 "is_configured": true, 00:12:57.571 "data_offset": 2048, 00:12:57.571 "data_size": 63488 00:12:57.571 }, 00:12:57.571 { 00:12:57.571 "name": "BaseBdev3", 00:12:57.571 "uuid": "4fda7ba3-75d3-51c5-af1d-1078c749f928", 00:12:57.571 "is_configured": true, 00:12:57.571 "data_offset": 2048, 00:12:57.571 "data_size": 63488 00:12:57.572 }, 00:12:57.572 { 00:12:57.572 "name": "BaseBdev4", 00:12:57.572 "uuid": "72a380af-009e-5467-a995-8198b66575e7", 00:12:57.572 "is_configured": true, 00:12:57.572 "data_offset": 2048, 00:12:57.572 "data_size": 63488 00:12:57.572 } 00:12:57.572 ] 00:12:57.572 }' 00:12:57.572 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.572 11:22:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.830 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:57.830 11:22:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.830 11:22:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.830 [2024-11-20 11:22:40.868684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:57.830 [2024-11-20 11:22:40.868772] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:57.830 [2024-11-20 11:22:40.871404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.830 [2024-11-20 11:22:40.871529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.830 [2024-11-20 11:22:40.871598] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:57.830 [2024-11-20 11:22:40.871650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:57.830 11:22:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.830 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73184 00:12:57.830 11:22:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73184 ']' 00:12:57.830 11:22:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73184 00:12:57.830 { 00:12:57.830 "results": [ 00:12:57.830 { 00:12:57.830 "job": "raid_bdev1", 00:12:57.830 "core_mask": "0x1", 00:12:57.830 "workload": "randrw", 00:12:57.830 "percentage": 50, 00:12:57.830 "status": "finished", 00:12:57.830 "queue_depth": 1, 00:12:57.830 "io_size": 131072, 00:12:57.830 "runtime": 1.381815, 00:12:57.830 "iops": 14960.758133324649, 00:12:57.830 "mibps": 1870.0947666655811, 00:12:57.830 "io_failed": 1, 00:12:57.830 "io_timeout": 0, 00:12:57.830 "avg_latency_us": 92.88495804911597, 00:12:57.830 "min_latency_us": 27.053275109170304, 00:12:57.830 "max_latency_us": 1745.7187772925763 00:12:57.830 } 00:12:57.830 ], 00:12:57.830 "core_count": 1 00:12:57.830 } 00:12:57.830 11:22:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:57.830 11:22:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.830 11:22:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73184 00:12:57.830 11:22:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:57.830 killing process with pid 73184 00:12:57.830 11:22:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:57.830 11:22:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73184' 00:12:57.830 11:22:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73184 00:12:57.831 [2024-11-20 11:22:40.918657] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:57.831 11:22:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73184 00:12:58.397 [2024-11-20 11:22:41.249047] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:59.776 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2uDJnogayW 00:12:59.776 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:59.776 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:59.776 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:12:59.776 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:59.776 ************************************ 00:12:59.776 END TEST raid_write_error_test 00:12:59.776 ************************************ 00:12:59.776 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:59.776 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:59.776 11:22:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:12:59.776 00:12:59.776 real 0m4.793s 00:12:59.776 user 0m5.702s 00:12:59.776 sys 0m0.560s 00:12:59.776 11:22:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.776 11:22:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.776 11:22:42 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:59.776 11:22:42 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:12:59.776 11:22:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:59.776 11:22:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.776 11:22:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:59.776 ************************************ 00:12:59.776 START TEST raid_state_function_test 00:12:59.776 ************************************ 00:12:59.776 11:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:12:59.776 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:59.776 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:59.776 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:59.776 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:59.776 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:59.777 11:22:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73323 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73323' 00:12:59.777 Process raid pid: 73323 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73323 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73323 ']' 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.777 11:22:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.777 [2024-11-20 11:22:42.677356] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:12:59.777 [2024-11-20 11:22:42.677585] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.777 [2024-11-20 11:22:42.860187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.037 [2024-11-20 11:22:42.990748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.296 [2024-11-20 11:22:43.236596] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.296 [2024-11-20 11:22:43.236746] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.555 11:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.555 11:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:00.555 11:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:00.555 11:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.555 11:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.555 [2024-11-20 11:22:43.582549] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:00.555 [2024-11-20 11:22:43.582618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:00.555 [2024-11-20 11:22:43.582631] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:00.555 [2024-11-20 11:22:43.582643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:00.555 [2024-11-20 11:22:43.582651] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:00.555 [2024-11-20 11:22:43.582662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:00.555 [2024-11-20 11:22:43.582669] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:00.556 [2024-11-20 11:22:43.582679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:00.556 11:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.556 11:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:00.556 11:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.556 11:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.556 11:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.556 11:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.556 11:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.556 11:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.556 11:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.556 11:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.556 11:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.556 11:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.556 11:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.556 11:22:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.556 11:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.556 11:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.556 11:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.556 "name": "Existed_Raid", 00:13:00.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.556 "strip_size_kb": 0, 00:13:00.556 "state": "configuring", 00:13:00.556 "raid_level": "raid1", 00:13:00.556 "superblock": false, 00:13:00.556 "num_base_bdevs": 4, 00:13:00.556 "num_base_bdevs_discovered": 0, 00:13:00.556 "num_base_bdevs_operational": 4, 00:13:00.556 "base_bdevs_list": [ 00:13:00.556 { 00:13:00.556 "name": "BaseBdev1", 00:13:00.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.556 "is_configured": false, 00:13:00.556 "data_offset": 0, 00:13:00.556 "data_size": 0 00:13:00.556 }, 00:13:00.556 { 00:13:00.556 "name": "BaseBdev2", 00:13:00.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.556 "is_configured": false, 00:13:00.556 "data_offset": 0, 00:13:00.556 "data_size": 0 00:13:00.556 }, 00:13:00.556 { 00:13:00.556 "name": "BaseBdev3", 00:13:00.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.556 "is_configured": false, 00:13:00.556 "data_offset": 0, 00:13:00.556 "data_size": 0 00:13:00.556 }, 00:13:00.556 { 00:13:00.556 "name": "BaseBdev4", 00:13:00.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.556 "is_configured": false, 00:13:00.556 "data_offset": 0, 00:13:00.556 "data_size": 0 00:13:00.556 } 00:13:00.556 ] 00:13:00.556 }' 00:13:00.556 11:22:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.556 11:22:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.126 [2024-11-20 11:22:44.077649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:01.126 [2024-11-20 11:22:44.077768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.126 [2024-11-20 11:22:44.089606] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:01.126 [2024-11-20 11:22:44.089703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:01.126 [2024-11-20 11:22:44.089737] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:01.126 [2024-11-20 11:22:44.089763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:01.126 [2024-11-20 11:22:44.089807] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:01.126 [2024-11-20 11:22:44.089832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:01.126 [2024-11-20 11:22:44.089860] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:01.126 [2024-11-20 11:22:44.089912] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.126 [2024-11-20 11:22:44.142477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.126 BaseBdev1 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.126 [ 00:13:01.126 { 00:13:01.126 "name": "BaseBdev1", 00:13:01.126 "aliases": [ 00:13:01.126 "20f2d968-0226-4a58-bea2-0e9880a83fff" 00:13:01.126 ], 00:13:01.126 "product_name": "Malloc disk", 00:13:01.126 "block_size": 512, 00:13:01.126 "num_blocks": 65536, 00:13:01.126 "uuid": "20f2d968-0226-4a58-bea2-0e9880a83fff", 00:13:01.126 "assigned_rate_limits": { 00:13:01.126 "rw_ios_per_sec": 0, 00:13:01.126 "rw_mbytes_per_sec": 0, 00:13:01.126 "r_mbytes_per_sec": 0, 00:13:01.126 "w_mbytes_per_sec": 0 00:13:01.126 }, 00:13:01.126 "claimed": true, 00:13:01.126 "claim_type": "exclusive_write", 00:13:01.126 "zoned": false, 00:13:01.126 "supported_io_types": { 00:13:01.126 "read": true, 00:13:01.126 "write": true, 00:13:01.126 "unmap": true, 00:13:01.126 "flush": true, 00:13:01.126 "reset": true, 00:13:01.126 "nvme_admin": false, 00:13:01.126 "nvme_io": false, 00:13:01.126 "nvme_io_md": false, 00:13:01.126 "write_zeroes": true, 00:13:01.126 "zcopy": true, 00:13:01.126 "get_zone_info": false, 00:13:01.126 "zone_management": false, 00:13:01.126 "zone_append": false, 00:13:01.126 "compare": false, 00:13:01.126 "compare_and_write": false, 00:13:01.126 "abort": true, 00:13:01.126 "seek_hole": false, 00:13:01.126 "seek_data": false, 00:13:01.126 "copy": true, 00:13:01.126 "nvme_iov_md": false 00:13:01.126 }, 00:13:01.126 "memory_domains": [ 00:13:01.126 { 00:13:01.126 "dma_device_id": "system", 00:13:01.126 "dma_device_type": 1 00:13:01.126 }, 00:13:01.126 { 00:13:01.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.126 "dma_device_type": 2 00:13:01.126 } 00:13:01.126 ], 00:13:01.126 "driver_specific": {} 00:13:01.126 } 00:13:01.126 ] 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.126 "name": "Existed_Raid", 
00:13:01.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.126 "strip_size_kb": 0, 00:13:01.126 "state": "configuring", 00:13:01.126 "raid_level": "raid1", 00:13:01.126 "superblock": false, 00:13:01.126 "num_base_bdevs": 4, 00:13:01.126 "num_base_bdevs_discovered": 1, 00:13:01.126 "num_base_bdevs_operational": 4, 00:13:01.126 "base_bdevs_list": [ 00:13:01.126 { 00:13:01.126 "name": "BaseBdev1", 00:13:01.126 "uuid": "20f2d968-0226-4a58-bea2-0e9880a83fff", 00:13:01.126 "is_configured": true, 00:13:01.126 "data_offset": 0, 00:13:01.126 "data_size": 65536 00:13:01.126 }, 00:13:01.126 { 00:13:01.126 "name": "BaseBdev2", 00:13:01.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.126 "is_configured": false, 00:13:01.126 "data_offset": 0, 00:13:01.126 "data_size": 0 00:13:01.126 }, 00:13:01.126 { 00:13:01.126 "name": "BaseBdev3", 00:13:01.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.126 "is_configured": false, 00:13:01.126 "data_offset": 0, 00:13:01.126 "data_size": 0 00:13:01.126 }, 00:13:01.126 { 00:13:01.126 "name": "BaseBdev4", 00:13:01.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.126 "is_configured": false, 00:13:01.126 "data_offset": 0, 00:13:01.126 "data_size": 0 00:13:01.126 } 00:13:01.126 ] 00:13:01.126 }' 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.126 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.696 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:01.696 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.696 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.696 [2024-11-20 11:22:44.613778] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:01.696 [2024-11-20 11:22:44.613842] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:01.696 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.696 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:01.696 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.696 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.696 [2024-11-20 11:22:44.625816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.696 [2024-11-20 11:22:44.628095] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:01.696 [2024-11-20 11:22:44.628192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:01.696 [2024-11-20 11:22:44.628227] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:01.696 [2024-11-20 11:22:44.628258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:01.696 [2024-11-20 11:22:44.628281] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:01.696 [2024-11-20 11:22:44.628307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:01.696 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.697 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:01.697 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:01.697 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:01.697 
11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.697 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.697 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.697 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.697 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.697 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.697 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.697 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.697 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.697 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.697 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.697 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.697 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.697 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.697 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.697 "name": "Existed_Raid", 00:13:01.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.697 "strip_size_kb": 0, 00:13:01.697 "state": "configuring", 00:13:01.697 "raid_level": "raid1", 00:13:01.697 "superblock": false, 00:13:01.697 "num_base_bdevs": 4, 00:13:01.697 "num_base_bdevs_discovered": 1, 
00:13:01.697 "num_base_bdevs_operational": 4, 00:13:01.697 "base_bdevs_list": [ 00:13:01.697 { 00:13:01.697 "name": "BaseBdev1", 00:13:01.697 "uuid": "20f2d968-0226-4a58-bea2-0e9880a83fff", 00:13:01.697 "is_configured": true, 00:13:01.697 "data_offset": 0, 00:13:01.697 "data_size": 65536 00:13:01.697 }, 00:13:01.697 { 00:13:01.697 "name": "BaseBdev2", 00:13:01.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.697 "is_configured": false, 00:13:01.697 "data_offset": 0, 00:13:01.697 "data_size": 0 00:13:01.697 }, 00:13:01.697 { 00:13:01.697 "name": "BaseBdev3", 00:13:01.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.697 "is_configured": false, 00:13:01.697 "data_offset": 0, 00:13:01.697 "data_size": 0 00:13:01.697 }, 00:13:01.697 { 00:13:01.697 "name": "BaseBdev4", 00:13:01.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.697 "is_configured": false, 00:13:01.697 "data_offset": 0, 00:13:01.697 "data_size": 0 00:13:01.697 } 00:13:01.697 ] 00:13:01.697 }' 00:13:01.697 11:22:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.697 11:22:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.267 [2024-11-20 11:22:45.148552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:02.267 BaseBdev2 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.267 [ 00:13:02.267 { 00:13:02.267 "name": "BaseBdev2", 00:13:02.267 "aliases": [ 00:13:02.267 "07728b54-6f16-4fea-9ed8-628be5da7b7a" 00:13:02.267 ], 00:13:02.267 "product_name": "Malloc disk", 00:13:02.267 "block_size": 512, 00:13:02.267 "num_blocks": 65536, 00:13:02.267 "uuid": "07728b54-6f16-4fea-9ed8-628be5da7b7a", 00:13:02.267 "assigned_rate_limits": { 00:13:02.267 "rw_ios_per_sec": 0, 00:13:02.267 "rw_mbytes_per_sec": 0, 00:13:02.267 "r_mbytes_per_sec": 0, 00:13:02.267 "w_mbytes_per_sec": 0 00:13:02.267 }, 00:13:02.267 "claimed": true, 00:13:02.267 "claim_type": "exclusive_write", 00:13:02.267 "zoned": false, 00:13:02.267 "supported_io_types": { 00:13:02.267 "read": true, 
00:13:02.267 "write": true, 00:13:02.267 "unmap": true, 00:13:02.267 "flush": true, 00:13:02.267 "reset": true, 00:13:02.267 "nvme_admin": false, 00:13:02.267 "nvme_io": false, 00:13:02.267 "nvme_io_md": false, 00:13:02.267 "write_zeroes": true, 00:13:02.267 "zcopy": true, 00:13:02.267 "get_zone_info": false, 00:13:02.267 "zone_management": false, 00:13:02.267 "zone_append": false, 00:13:02.267 "compare": false, 00:13:02.267 "compare_and_write": false, 00:13:02.267 "abort": true, 00:13:02.267 "seek_hole": false, 00:13:02.267 "seek_data": false, 00:13:02.267 "copy": true, 00:13:02.267 "nvme_iov_md": false 00:13:02.267 }, 00:13:02.267 "memory_domains": [ 00:13:02.267 { 00:13:02.267 "dma_device_id": "system", 00:13:02.267 "dma_device_type": 1 00:13:02.267 }, 00:13:02.267 { 00:13:02.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.267 "dma_device_type": 2 00:13:02.267 } 00:13:02.267 ], 00:13:02.267 "driver_specific": {} 00:13:02.267 } 00:13:02.267 ] 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.267 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.267 "name": "Existed_Raid", 00:13:02.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.267 "strip_size_kb": 0, 00:13:02.267 "state": "configuring", 00:13:02.267 "raid_level": "raid1", 00:13:02.267 "superblock": false, 00:13:02.267 "num_base_bdevs": 4, 00:13:02.267 "num_base_bdevs_discovered": 2, 00:13:02.267 "num_base_bdevs_operational": 4, 00:13:02.267 "base_bdevs_list": [ 00:13:02.267 { 00:13:02.267 "name": "BaseBdev1", 00:13:02.267 "uuid": "20f2d968-0226-4a58-bea2-0e9880a83fff", 00:13:02.267 "is_configured": true, 00:13:02.267 "data_offset": 0, 00:13:02.267 "data_size": 65536 00:13:02.267 }, 00:13:02.267 { 00:13:02.267 "name": "BaseBdev2", 00:13:02.267 "uuid": "07728b54-6f16-4fea-9ed8-628be5da7b7a", 00:13:02.267 "is_configured": true, 
00:13:02.267 "data_offset": 0, 00:13:02.267 "data_size": 65536 00:13:02.267 }, 00:13:02.267 { 00:13:02.267 "name": "BaseBdev3", 00:13:02.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.267 "is_configured": false, 00:13:02.267 "data_offset": 0, 00:13:02.267 "data_size": 0 00:13:02.267 }, 00:13:02.267 { 00:13:02.267 "name": "BaseBdev4", 00:13:02.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.267 "is_configured": false, 00:13:02.268 "data_offset": 0, 00:13:02.268 "data_size": 0 00:13:02.268 } 00:13:02.268 ] 00:13:02.268 }' 00:13:02.268 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.268 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.846 [2024-11-20 11:22:45.707464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:02.846 BaseBdev3 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.846 [ 00:13:02.846 { 00:13:02.846 "name": "BaseBdev3", 00:13:02.846 "aliases": [ 00:13:02.846 "b47656b9-a4ba-4001-9dc1-52d0c9459290" 00:13:02.846 ], 00:13:02.846 "product_name": "Malloc disk", 00:13:02.846 "block_size": 512, 00:13:02.846 "num_blocks": 65536, 00:13:02.846 "uuid": "b47656b9-a4ba-4001-9dc1-52d0c9459290", 00:13:02.846 "assigned_rate_limits": { 00:13:02.846 "rw_ios_per_sec": 0, 00:13:02.846 "rw_mbytes_per_sec": 0, 00:13:02.846 "r_mbytes_per_sec": 0, 00:13:02.846 "w_mbytes_per_sec": 0 00:13:02.846 }, 00:13:02.846 "claimed": true, 00:13:02.846 "claim_type": "exclusive_write", 00:13:02.846 "zoned": false, 00:13:02.846 "supported_io_types": { 00:13:02.846 "read": true, 00:13:02.846 "write": true, 00:13:02.846 "unmap": true, 00:13:02.846 "flush": true, 00:13:02.846 "reset": true, 00:13:02.846 "nvme_admin": false, 00:13:02.846 "nvme_io": false, 00:13:02.846 "nvme_io_md": false, 00:13:02.846 "write_zeroes": true, 00:13:02.846 "zcopy": true, 00:13:02.846 "get_zone_info": false, 00:13:02.846 "zone_management": false, 00:13:02.846 "zone_append": false, 00:13:02.846 "compare": false, 00:13:02.846 "compare_and_write": false, 
00:13:02.846 "abort": true, 00:13:02.846 "seek_hole": false, 00:13:02.846 "seek_data": false, 00:13:02.846 "copy": true, 00:13:02.846 "nvme_iov_md": false 00:13:02.846 }, 00:13:02.846 "memory_domains": [ 00:13:02.846 { 00:13:02.846 "dma_device_id": "system", 00:13:02.846 "dma_device_type": 1 00:13:02.846 }, 00:13:02.846 { 00:13:02.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.846 "dma_device_type": 2 00:13:02.846 } 00:13:02.846 ], 00:13:02.846 "driver_specific": {} 00:13:02.846 } 00:13:02.846 ] 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.846 "name": "Existed_Raid", 00:13:02.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.846 "strip_size_kb": 0, 00:13:02.846 "state": "configuring", 00:13:02.846 "raid_level": "raid1", 00:13:02.846 "superblock": false, 00:13:02.846 "num_base_bdevs": 4, 00:13:02.846 "num_base_bdevs_discovered": 3, 00:13:02.846 "num_base_bdevs_operational": 4, 00:13:02.846 "base_bdevs_list": [ 00:13:02.846 { 00:13:02.846 "name": "BaseBdev1", 00:13:02.846 "uuid": "20f2d968-0226-4a58-bea2-0e9880a83fff", 00:13:02.846 "is_configured": true, 00:13:02.846 "data_offset": 0, 00:13:02.846 "data_size": 65536 00:13:02.846 }, 00:13:02.846 { 00:13:02.846 "name": "BaseBdev2", 00:13:02.846 "uuid": "07728b54-6f16-4fea-9ed8-628be5da7b7a", 00:13:02.846 "is_configured": true, 00:13:02.846 "data_offset": 0, 00:13:02.846 "data_size": 65536 00:13:02.846 }, 00:13:02.846 { 00:13:02.846 "name": "BaseBdev3", 00:13:02.846 "uuid": "b47656b9-a4ba-4001-9dc1-52d0c9459290", 00:13:02.846 "is_configured": true, 00:13:02.846 "data_offset": 0, 00:13:02.846 "data_size": 65536 00:13:02.846 }, 00:13:02.846 { 00:13:02.846 "name": "BaseBdev4", 00:13:02.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.846 "is_configured": false, 
00:13:02.846 "data_offset": 0, 00:13:02.846 "data_size": 0 00:13:02.846 } 00:13:02.846 ] 00:13:02.846 }' 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.846 11:22:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.122 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:03.122 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.122 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.122 [2024-11-20 11:22:46.216007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:03.122 [2024-11-20 11:22:46.216065] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:03.122 [2024-11-20 11:22:46.216073] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:03.122 [2024-11-20 11:22:46.216343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:03.122 [2024-11-20 11:22:46.216552] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:03.122 [2024-11-20 11:22:46.216570] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:03.122 [2024-11-20 11:22:46.216882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.122 BaseBdev4 00:13:03.122 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.122 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:03.122 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:03.122 11:22:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:03.122 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:03.122 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:03.122 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:03.122 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:03.122 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.122 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.122 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.122 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:03.122 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.122 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.383 [ 00:13:03.383 { 00:13:03.383 "name": "BaseBdev4", 00:13:03.383 "aliases": [ 00:13:03.383 "5b635805-0924-4c68-a3f5-14ed385e976f" 00:13:03.383 ], 00:13:03.383 "product_name": "Malloc disk", 00:13:03.383 "block_size": 512, 00:13:03.383 "num_blocks": 65536, 00:13:03.383 "uuid": "5b635805-0924-4c68-a3f5-14ed385e976f", 00:13:03.383 "assigned_rate_limits": { 00:13:03.383 "rw_ios_per_sec": 0, 00:13:03.383 "rw_mbytes_per_sec": 0, 00:13:03.383 "r_mbytes_per_sec": 0, 00:13:03.383 "w_mbytes_per_sec": 0 00:13:03.383 }, 00:13:03.383 "claimed": true, 00:13:03.383 "claim_type": "exclusive_write", 00:13:03.383 "zoned": false, 00:13:03.383 "supported_io_types": { 00:13:03.383 "read": true, 00:13:03.383 "write": true, 00:13:03.383 "unmap": true, 00:13:03.383 "flush": true, 00:13:03.383 "reset": true, 00:13:03.383 
"nvme_admin": false, 00:13:03.383 "nvme_io": false, 00:13:03.383 "nvme_io_md": false, 00:13:03.383 "write_zeroes": true, 00:13:03.383 "zcopy": true, 00:13:03.383 "get_zone_info": false, 00:13:03.383 "zone_management": false, 00:13:03.383 "zone_append": false, 00:13:03.383 "compare": false, 00:13:03.383 "compare_and_write": false, 00:13:03.383 "abort": true, 00:13:03.383 "seek_hole": false, 00:13:03.383 "seek_data": false, 00:13:03.383 "copy": true, 00:13:03.383 "nvme_iov_md": false 00:13:03.383 }, 00:13:03.383 "memory_domains": [ 00:13:03.383 { 00:13:03.383 "dma_device_id": "system", 00:13:03.383 "dma_device_type": 1 00:13:03.383 }, 00:13:03.383 { 00:13:03.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.383 "dma_device_type": 2 00:13:03.383 } 00:13:03.383 ], 00:13:03.383 "driver_specific": {} 00:13:03.383 } 00:13:03.383 ] 00:13:03.383 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.383 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:03.383 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:03.383 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:03.383 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:03.383 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.383 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.383 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.383 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.383 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:03.383 11:22:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.383 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.383 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.383 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.383 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.383 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.383 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.383 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.383 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.383 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.383 "name": "Existed_Raid", 00:13:03.383 "uuid": "fb04af19-ad33-44a2-8aff-3c4079d33dc7", 00:13:03.383 "strip_size_kb": 0, 00:13:03.383 "state": "online", 00:13:03.383 "raid_level": "raid1", 00:13:03.383 "superblock": false, 00:13:03.383 "num_base_bdevs": 4, 00:13:03.383 "num_base_bdevs_discovered": 4, 00:13:03.383 "num_base_bdevs_operational": 4, 00:13:03.383 "base_bdevs_list": [ 00:13:03.383 { 00:13:03.383 "name": "BaseBdev1", 00:13:03.383 "uuid": "20f2d968-0226-4a58-bea2-0e9880a83fff", 00:13:03.383 "is_configured": true, 00:13:03.383 "data_offset": 0, 00:13:03.383 "data_size": 65536 00:13:03.383 }, 00:13:03.383 { 00:13:03.383 "name": "BaseBdev2", 00:13:03.383 "uuid": "07728b54-6f16-4fea-9ed8-628be5da7b7a", 00:13:03.383 "is_configured": true, 00:13:03.383 "data_offset": 0, 00:13:03.383 "data_size": 65536 00:13:03.383 }, 00:13:03.383 { 00:13:03.383 "name": "BaseBdev3", 00:13:03.383 "uuid": 
"b47656b9-a4ba-4001-9dc1-52d0c9459290", 00:13:03.383 "is_configured": true, 00:13:03.383 "data_offset": 0, 00:13:03.383 "data_size": 65536 00:13:03.383 }, 00:13:03.383 { 00:13:03.383 "name": "BaseBdev4", 00:13:03.383 "uuid": "5b635805-0924-4c68-a3f5-14ed385e976f", 00:13:03.383 "is_configured": true, 00:13:03.383 "data_offset": 0, 00:13:03.383 "data_size": 65536 00:13:03.383 } 00:13:03.383 ] 00:13:03.383 }' 00:13:03.383 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.383 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.643 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:03.643 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:03.643 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:03.644 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:03.644 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:03.644 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:03.644 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:03.644 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.644 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.644 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:03.644 [2024-11-20 11:22:46.727827] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:03.644 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.904 11:22:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:03.904 "name": "Existed_Raid", 00:13:03.904 "aliases": [ 00:13:03.904 "fb04af19-ad33-44a2-8aff-3c4079d33dc7" 00:13:03.904 ], 00:13:03.904 "product_name": "Raid Volume", 00:13:03.904 "block_size": 512, 00:13:03.904 "num_blocks": 65536, 00:13:03.904 "uuid": "fb04af19-ad33-44a2-8aff-3c4079d33dc7", 00:13:03.904 "assigned_rate_limits": { 00:13:03.904 "rw_ios_per_sec": 0, 00:13:03.904 "rw_mbytes_per_sec": 0, 00:13:03.904 "r_mbytes_per_sec": 0, 00:13:03.904 "w_mbytes_per_sec": 0 00:13:03.904 }, 00:13:03.904 "claimed": false, 00:13:03.904 "zoned": false, 00:13:03.904 "supported_io_types": { 00:13:03.904 "read": true, 00:13:03.904 "write": true, 00:13:03.904 "unmap": false, 00:13:03.904 "flush": false, 00:13:03.904 "reset": true, 00:13:03.904 "nvme_admin": false, 00:13:03.904 "nvme_io": false, 00:13:03.904 "nvme_io_md": false, 00:13:03.904 "write_zeroes": true, 00:13:03.904 "zcopy": false, 00:13:03.904 "get_zone_info": false, 00:13:03.904 "zone_management": false, 00:13:03.904 "zone_append": false, 00:13:03.904 "compare": false, 00:13:03.904 "compare_and_write": false, 00:13:03.904 "abort": false, 00:13:03.904 "seek_hole": false, 00:13:03.904 "seek_data": false, 00:13:03.904 "copy": false, 00:13:03.904 "nvme_iov_md": false 00:13:03.904 }, 00:13:03.904 "memory_domains": [ 00:13:03.904 { 00:13:03.904 "dma_device_id": "system", 00:13:03.904 "dma_device_type": 1 00:13:03.904 }, 00:13:03.904 { 00:13:03.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.904 "dma_device_type": 2 00:13:03.904 }, 00:13:03.904 { 00:13:03.904 "dma_device_id": "system", 00:13:03.904 "dma_device_type": 1 00:13:03.904 }, 00:13:03.904 { 00:13:03.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.904 "dma_device_type": 2 00:13:03.904 }, 00:13:03.904 { 00:13:03.904 "dma_device_id": "system", 00:13:03.904 "dma_device_type": 1 00:13:03.904 }, 00:13:03.904 { 00:13:03.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:03.904 "dma_device_type": 2 00:13:03.904 }, 00:13:03.904 { 00:13:03.904 "dma_device_id": "system", 00:13:03.904 "dma_device_type": 1 00:13:03.904 }, 00:13:03.904 { 00:13:03.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.904 "dma_device_type": 2 00:13:03.904 } 00:13:03.904 ], 00:13:03.904 "driver_specific": { 00:13:03.904 "raid": { 00:13:03.904 "uuid": "fb04af19-ad33-44a2-8aff-3c4079d33dc7", 00:13:03.904 "strip_size_kb": 0, 00:13:03.904 "state": "online", 00:13:03.904 "raid_level": "raid1", 00:13:03.904 "superblock": false, 00:13:03.904 "num_base_bdevs": 4, 00:13:03.904 "num_base_bdevs_discovered": 4, 00:13:03.904 "num_base_bdevs_operational": 4, 00:13:03.904 "base_bdevs_list": [ 00:13:03.904 { 00:13:03.904 "name": "BaseBdev1", 00:13:03.904 "uuid": "20f2d968-0226-4a58-bea2-0e9880a83fff", 00:13:03.904 "is_configured": true, 00:13:03.904 "data_offset": 0, 00:13:03.904 "data_size": 65536 00:13:03.904 }, 00:13:03.904 { 00:13:03.904 "name": "BaseBdev2", 00:13:03.904 "uuid": "07728b54-6f16-4fea-9ed8-628be5da7b7a", 00:13:03.904 "is_configured": true, 00:13:03.904 "data_offset": 0, 00:13:03.904 "data_size": 65536 00:13:03.904 }, 00:13:03.904 { 00:13:03.904 "name": "BaseBdev3", 00:13:03.904 "uuid": "b47656b9-a4ba-4001-9dc1-52d0c9459290", 00:13:03.904 "is_configured": true, 00:13:03.904 "data_offset": 0, 00:13:03.904 "data_size": 65536 00:13:03.904 }, 00:13:03.904 { 00:13:03.904 "name": "BaseBdev4", 00:13:03.904 "uuid": "5b635805-0924-4c68-a3f5-14ed385e976f", 00:13:03.904 "is_configured": true, 00:13:03.904 "data_offset": 0, 00:13:03.904 "data_size": 65536 00:13:03.904 } 00:13:03.904 ] 00:13:03.904 } 00:13:03.904 } 00:13:03.904 }' 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:03.904 BaseBdev2 00:13:03.904 BaseBdev3 
00:13:03.904 BaseBdev4' 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.904 11:22:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.904 11:22:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.904 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.905 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.905 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.905 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:03.905 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.905 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.905 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:04.165 11:22:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.165 [2024-11-20 11:22:47.062901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.165 
11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.165 "name": "Existed_Raid", 00:13:04.165 "uuid": "fb04af19-ad33-44a2-8aff-3c4079d33dc7", 00:13:04.165 "strip_size_kb": 0, 00:13:04.165 "state": "online", 00:13:04.165 "raid_level": "raid1", 00:13:04.165 "superblock": false, 00:13:04.165 "num_base_bdevs": 4, 00:13:04.165 "num_base_bdevs_discovered": 3, 00:13:04.165 "num_base_bdevs_operational": 3, 00:13:04.165 "base_bdevs_list": [ 00:13:04.165 { 00:13:04.165 "name": null, 00:13:04.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.165 "is_configured": false, 00:13:04.165 "data_offset": 0, 00:13:04.165 "data_size": 65536 00:13:04.165 }, 00:13:04.165 { 00:13:04.165 "name": "BaseBdev2", 00:13:04.165 "uuid": "07728b54-6f16-4fea-9ed8-628be5da7b7a", 00:13:04.165 "is_configured": true, 00:13:04.165 "data_offset": 0, 00:13:04.165 "data_size": 65536 00:13:04.165 }, 00:13:04.165 { 00:13:04.165 "name": "BaseBdev3", 00:13:04.165 "uuid": "b47656b9-a4ba-4001-9dc1-52d0c9459290", 00:13:04.165 "is_configured": true, 00:13:04.165 "data_offset": 0, 
00:13:04.165 "data_size": 65536 00:13:04.165 }, 00:13:04.165 { 00:13:04.165 "name": "BaseBdev4", 00:13:04.165 "uuid": "5b635805-0924-4c68-a3f5-14ed385e976f", 00:13:04.165 "is_configured": true, 00:13:04.165 "data_offset": 0, 00:13:04.165 "data_size": 65536 00:13:04.165 } 00:13:04.165 ] 00:13:04.165 }' 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.165 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.733 [2024-11-20 11:22:47.641865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.733 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.733 [2024-11-20 11:22:47.815053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:04.991 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.991 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:04.991 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:04.991 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.991 11:22:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.991 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:04.991 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.991 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.991 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:04.991 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:04.991 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:04.991 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.991 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.991 [2024-11-20 11:22:47.994204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:04.991 [2024-11-20 11:22:47.994316] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.269 [2024-11-20 11:22:48.112810] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.269 [2024-11-20 11:22:48.112877] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.269 [2024-11-20 11:22:48.112892] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.269 BaseBdev2 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:05.269 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.270 [ 00:13:05.270 { 00:13:05.270 "name": "BaseBdev2", 00:13:05.270 "aliases": [ 00:13:05.270 "2ee7db2f-af9c-452d-a820-4d4343c575d9" 00:13:05.270 ], 00:13:05.270 "product_name": "Malloc disk", 00:13:05.270 "block_size": 512, 00:13:05.270 "num_blocks": 65536, 00:13:05.270 "uuid": "2ee7db2f-af9c-452d-a820-4d4343c575d9", 00:13:05.270 "assigned_rate_limits": { 00:13:05.270 "rw_ios_per_sec": 0, 00:13:05.270 "rw_mbytes_per_sec": 0, 00:13:05.270 "r_mbytes_per_sec": 0, 00:13:05.270 "w_mbytes_per_sec": 0 00:13:05.270 }, 00:13:05.270 "claimed": false, 00:13:05.270 "zoned": false, 00:13:05.270 "supported_io_types": { 00:13:05.270 "read": true, 00:13:05.270 "write": true, 00:13:05.270 "unmap": true, 00:13:05.270 "flush": true, 00:13:05.270 "reset": true, 00:13:05.270 "nvme_admin": false, 00:13:05.270 "nvme_io": false, 00:13:05.270 "nvme_io_md": false, 00:13:05.270 "write_zeroes": true, 00:13:05.270 "zcopy": true, 00:13:05.270 "get_zone_info": false, 00:13:05.270 "zone_management": false, 00:13:05.270 "zone_append": false, 
00:13:05.270 "compare": false, 00:13:05.270 "compare_and_write": false, 00:13:05.270 "abort": true, 00:13:05.270 "seek_hole": false, 00:13:05.270 "seek_data": false, 00:13:05.270 "copy": true, 00:13:05.270 "nvme_iov_md": false 00:13:05.270 }, 00:13:05.270 "memory_domains": [ 00:13:05.270 { 00:13:05.270 "dma_device_id": "system", 00:13:05.270 "dma_device_type": 1 00:13:05.270 }, 00:13:05.270 { 00:13:05.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.270 "dma_device_type": 2 00:13:05.270 } 00:13:05.270 ], 00:13:05.270 "driver_specific": {} 00:13:05.270 } 00:13:05.270 ] 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.270 BaseBdev3 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.270 [ 00:13:05.270 { 00:13:05.270 "name": "BaseBdev3", 00:13:05.270 "aliases": [ 00:13:05.270 "25ce708d-7472-481d-afac-776accdde5ae" 00:13:05.270 ], 00:13:05.270 "product_name": "Malloc disk", 00:13:05.270 "block_size": 512, 00:13:05.270 "num_blocks": 65536, 00:13:05.270 "uuid": "25ce708d-7472-481d-afac-776accdde5ae", 00:13:05.270 "assigned_rate_limits": { 00:13:05.270 "rw_ios_per_sec": 0, 00:13:05.270 "rw_mbytes_per_sec": 0, 00:13:05.270 "r_mbytes_per_sec": 0, 00:13:05.270 "w_mbytes_per_sec": 0 00:13:05.270 }, 00:13:05.270 "claimed": false, 00:13:05.270 "zoned": false, 00:13:05.270 "supported_io_types": { 00:13:05.270 "read": true, 00:13:05.270 "write": true, 00:13:05.270 "unmap": true, 00:13:05.270 "flush": true, 00:13:05.270 "reset": true, 00:13:05.270 "nvme_admin": false, 00:13:05.270 "nvme_io": false, 00:13:05.270 "nvme_io_md": false, 00:13:05.270 "write_zeroes": true, 00:13:05.270 "zcopy": true, 00:13:05.270 "get_zone_info": false, 00:13:05.270 "zone_management": false, 00:13:05.270 "zone_append": false, 
00:13:05.270 "compare": false, 00:13:05.270 "compare_and_write": false, 00:13:05.270 "abort": true, 00:13:05.270 "seek_hole": false, 00:13:05.270 "seek_data": false, 00:13:05.270 "copy": true, 00:13:05.270 "nvme_iov_md": false 00:13:05.270 }, 00:13:05.270 "memory_domains": [ 00:13:05.270 { 00:13:05.270 "dma_device_id": "system", 00:13:05.270 "dma_device_type": 1 00:13:05.270 }, 00:13:05.270 { 00:13:05.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.270 "dma_device_type": 2 00:13:05.270 } 00:13:05.270 ], 00:13:05.270 "driver_specific": {} 00:13:05.270 } 00:13:05.270 ] 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.270 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.529 BaseBdev4 00:13:05.529 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.529 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:05.529 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:05.529 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:05.529 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:05.529 11:22:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:05.529 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:05.529 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.530 [ 00:13:05.530 { 00:13:05.530 "name": "BaseBdev4", 00:13:05.530 "aliases": [ 00:13:05.530 "9969b386-9d83-4b39-a5d2-73ed7f9fee27" 00:13:05.530 ], 00:13:05.530 "product_name": "Malloc disk", 00:13:05.530 "block_size": 512, 00:13:05.530 "num_blocks": 65536, 00:13:05.530 "uuid": "9969b386-9d83-4b39-a5d2-73ed7f9fee27", 00:13:05.530 "assigned_rate_limits": { 00:13:05.530 "rw_ios_per_sec": 0, 00:13:05.530 "rw_mbytes_per_sec": 0, 00:13:05.530 "r_mbytes_per_sec": 0, 00:13:05.530 "w_mbytes_per_sec": 0 00:13:05.530 }, 00:13:05.530 "claimed": false, 00:13:05.530 "zoned": false, 00:13:05.530 "supported_io_types": { 00:13:05.530 "read": true, 00:13:05.530 "write": true, 00:13:05.530 "unmap": true, 00:13:05.530 "flush": true, 00:13:05.530 "reset": true, 00:13:05.530 "nvme_admin": false, 00:13:05.530 "nvme_io": false, 00:13:05.530 "nvme_io_md": false, 00:13:05.530 "write_zeroes": true, 00:13:05.530 "zcopy": true, 00:13:05.530 "get_zone_info": false, 00:13:05.530 "zone_management": false, 00:13:05.530 "zone_append": false, 
00:13:05.530 "compare": false, 00:13:05.530 "compare_and_write": false, 00:13:05.530 "abort": true, 00:13:05.530 "seek_hole": false, 00:13:05.530 "seek_data": false, 00:13:05.530 "copy": true, 00:13:05.530 "nvme_iov_md": false 00:13:05.530 }, 00:13:05.530 "memory_domains": [ 00:13:05.530 { 00:13:05.530 "dma_device_id": "system", 00:13:05.530 "dma_device_type": 1 00:13:05.530 }, 00:13:05.530 { 00:13:05.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.530 "dma_device_type": 2 00:13:05.530 } 00:13:05.530 ], 00:13:05.530 "driver_specific": {} 00:13:05.530 } 00:13:05.530 ] 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.530 [2024-11-20 11:22:48.427236] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:05.530 [2024-11-20 11:22:48.427299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:05.530 [2024-11-20 11:22:48.427329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:05.530 [2024-11-20 11:22:48.429657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:05.530 [2024-11-20 11:22:48.429711] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:13:05.530 "name": "Existed_Raid", 00:13:05.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.530 "strip_size_kb": 0, 00:13:05.530 "state": "configuring", 00:13:05.530 "raid_level": "raid1", 00:13:05.530 "superblock": false, 00:13:05.530 "num_base_bdevs": 4, 00:13:05.530 "num_base_bdevs_discovered": 3, 00:13:05.530 "num_base_bdevs_operational": 4, 00:13:05.530 "base_bdevs_list": [ 00:13:05.530 { 00:13:05.530 "name": "BaseBdev1", 00:13:05.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.530 "is_configured": false, 00:13:05.530 "data_offset": 0, 00:13:05.530 "data_size": 0 00:13:05.530 }, 00:13:05.530 { 00:13:05.530 "name": "BaseBdev2", 00:13:05.530 "uuid": "2ee7db2f-af9c-452d-a820-4d4343c575d9", 00:13:05.530 "is_configured": true, 00:13:05.530 "data_offset": 0, 00:13:05.530 "data_size": 65536 00:13:05.530 }, 00:13:05.530 { 00:13:05.530 "name": "BaseBdev3", 00:13:05.530 "uuid": "25ce708d-7472-481d-afac-776accdde5ae", 00:13:05.530 "is_configured": true, 00:13:05.530 "data_offset": 0, 00:13:05.530 "data_size": 65536 00:13:05.530 }, 00:13:05.530 { 00:13:05.530 "name": "BaseBdev4", 00:13:05.530 "uuid": "9969b386-9d83-4b39-a5d2-73ed7f9fee27", 00:13:05.530 "is_configured": true, 00:13:05.530 "data_offset": 0, 00:13:05.530 "data_size": 65536 00:13:05.530 } 00:13:05.530 ] 00:13:05.530 }' 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.530 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.790 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:05.790 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.790 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.790 [2024-11-20 11:22:48.870520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
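The state dumps above follow a fixed pattern: the test fetches the raid's JSON via `bdev_raid_get_bdevs all`, filters it with jq, and compares `state`, `num_base_bdevs_operational`, and the count of configured base bdevs against expectations. Below is a minimal Python sketch (not the test's bash, and `check_state` is a hypothetical helper name) of that bookkeeping, using the same field layout as the JSON captured in this log:

```python
# Sketch of the verify_raid_bdev_state bookkeeping: count configured base
# bdevs and compare the raid state/operational count against expectations.
# The dict mirrors the bdev_raid_get_bdevs output shown in this log, where
# BaseBdev1 does not exist yet, so the raid stays in "configuring".
raid_bdev_info = {
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid1",
    "strip_size_kb": 0,
    "num_base_bdevs": 4,
    "num_base_bdevs_operational": 4,
    "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": False},
        {"name": "BaseBdev2", "is_configured": True},
        {"name": "BaseBdev3", "is_configured": True},
        {"name": "BaseBdev4", "is_configured": True},
    ],
}

def check_state(info, expected_state, expected_operational):
    # num_base_bdevs_discovered is simply the number of configured slots.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["state"] == expected_state
    assert info["num_base_bdevs_operational"] == expected_operational
    return discovered

print(check_state(raid_bdev_info, "configuring", 4))  # 3 of 4 slots configured
```

The real test drives the same comparison through `rpc_cmd` and jq; this sketch only shows why the log reports `num_base_bdevs_discovered: 3` while `num_base_bdevs_operational` stays 4.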
00:13:05.790 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.790 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:05.790 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.790 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.790 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.790 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.790 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:05.790 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.790 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.790 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.790 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.790 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.790 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.790 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.790 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.790 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.050 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.050 "name": "Existed_Raid", 00:13:06.050 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:06.050 "strip_size_kb": 0, 00:13:06.050 "state": "configuring", 00:13:06.050 "raid_level": "raid1", 00:13:06.050 "superblock": false, 00:13:06.050 "num_base_bdevs": 4, 00:13:06.050 "num_base_bdevs_discovered": 2, 00:13:06.050 "num_base_bdevs_operational": 4, 00:13:06.050 "base_bdevs_list": [ 00:13:06.050 { 00:13:06.050 "name": "BaseBdev1", 00:13:06.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.050 "is_configured": false, 00:13:06.050 "data_offset": 0, 00:13:06.050 "data_size": 0 00:13:06.050 }, 00:13:06.050 { 00:13:06.050 "name": null, 00:13:06.050 "uuid": "2ee7db2f-af9c-452d-a820-4d4343c575d9", 00:13:06.050 "is_configured": false, 00:13:06.050 "data_offset": 0, 00:13:06.050 "data_size": 65536 00:13:06.050 }, 00:13:06.050 { 00:13:06.050 "name": "BaseBdev3", 00:13:06.050 "uuid": "25ce708d-7472-481d-afac-776accdde5ae", 00:13:06.050 "is_configured": true, 00:13:06.050 "data_offset": 0, 00:13:06.050 "data_size": 65536 00:13:06.050 }, 00:13:06.050 { 00:13:06.050 "name": "BaseBdev4", 00:13:06.050 "uuid": "9969b386-9d83-4b39-a5d2-73ed7f9fee27", 00:13:06.050 "is_configured": true, 00:13:06.050 "data_offset": 0, 00:13:06.050 "data_size": 65536 00:13:06.050 } 00:13:06.050 ] 00:13:06.050 }' 00:13:06.050 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.050 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.309 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.309 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:06.309 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.309 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.309 11:22:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.309 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:06.309 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:06.309 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.309 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.567 [2024-11-20 11:22:49.437322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:06.567 BaseBdev1 00:13:06.567 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.567 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:06.567 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:06.567 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:06.567 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:06.567 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:06.567 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:06.567 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:06.567 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.567 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.568 [ 00:13:06.568 { 00:13:06.568 "name": "BaseBdev1", 00:13:06.568 "aliases": [ 00:13:06.568 "0eedcda5-bf7b-4eef-bc08-dd66f0f94aeb" 00:13:06.568 ], 00:13:06.568 "product_name": "Malloc disk", 00:13:06.568 "block_size": 512, 00:13:06.568 "num_blocks": 65536, 00:13:06.568 "uuid": "0eedcda5-bf7b-4eef-bc08-dd66f0f94aeb", 00:13:06.568 "assigned_rate_limits": { 00:13:06.568 "rw_ios_per_sec": 0, 00:13:06.568 "rw_mbytes_per_sec": 0, 00:13:06.568 "r_mbytes_per_sec": 0, 00:13:06.568 "w_mbytes_per_sec": 0 00:13:06.568 }, 00:13:06.568 "claimed": true, 00:13:06.568 "claim_type": "exclusive_write", 00:13:06.568 "zoned": false, 00:13:06.568 "supported_io_types": { 00:13:06.568 "read": true, 00:13:06.568 "write": true, 00:13:06.568 "unmap": true, 00:13:06.568 "flush": true, 00:13:06.568 "reset": true, 00:13:06.568 "nvme_admin": false, 00:13:06.568 "nvme_io": false, 00:13:06.568 "nvme_io_md": false, 00:13:06.568 "write_zeroes": true, 00:13:06.568 "zcopy": true, 00:13:06.568 "get_zone_info": false, 00:13:06.568 "zone_management": false, 00:13:06.568 "zone_append": false, 00:13:06.568 "compare": false, 00:13:06.568 "compare_and_write": false, 00:13:06.568 "abort": true, 00:13:06.568 "seek_hole": false, 00:13:06.568 "seek_data": false, 00:13:06.568 "copy": true, 00:13:06.568 "nvme_iov_md": false 00:13:06.568 }, 00:13:06.568 "memory_domains": [ 00:13:06.568 { 00:13:06.568 "dma_device_id": "system", 00:13:06.568 "dma_device_type": 1 00:13:06.568 }, 00:13:06.568 { 00:13:06.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.568 "dma_device_type": 2 00:13:06.568 } 00:13:06.568 ], 00:13:06.568 "driver_specific": {} 00:13:06.568 } 00:13:06.568 ] 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
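Note the difference between this `bdev_get_bdevs` dump and the earlier ones: BaseBdev2 through BaseBdev4 were reported with `"claimed": false` when created standalone, while BaseBdev1, created after the raid already listed it as a missing member, comes back with `"claimed": true` and `"claim_type": "exclusive_write"`. A small Python sketch of that distinction (the helper name `is_claimed_by_raid` is hypothetical, and the records are trimmed copies of the log's JSON):

```python
# Distinguish a standalone malloc bdev from one claimed by the raid module,
# based on the claimed/claim_type fields seen in this log's bdev_get_bdevs
# output. Records are abbreviated to the relevant fields.
def is_claimed_by_raid(bdev_record):
    return bool(bdev_record.get("claimed")) and \
        bdev_record.get("claim_type") == "exclusive_write"

standalone = {"name": "BaseBdev2", "product_name": "Malloc disk",
              "claimed": False}
member = {"name": "BaseBdev1", "product_name": "Malloc disk",
          "claimed": True, "claim_type": "exclusive_write"}

print(is_claimed_by_raid(standalone), is_claimed_by_raid(member))
```

The exclusive-write claim is what prevents another module from opening a base bdev for writes once the raid has taken it.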
00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.568 "name": "Existed_Raid", 00:13:06.568 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:06.568 "strip_size_kb": 0, 00:13:06.568 "state": "configuring", 00:13:06.568 "raid_level": "raid1", 00:13:06.568 "superblock": false, 00:13:06.568 "num_base_bdevs": 4, 00:13:06.568 "num_base_bdevs_discovered": 3, 00:13:06.568 "num_base_bdevs_operational": 4, 00:13:06.568 "base_bdevs_list": [ 00:13:06.568 { 00:13:06.568 "name": "BaseBdev1", 00:13:06.568 "uuid": "0eedcda5-bf7b-4eef-bc08-dd66f0f94aeb", 00:13:06.568 "is_configured": true, 00:13:06.568 "data_offset": 0, 00:13:06.568 "data_size": 65536 00:13:06.568 }, 00:13:06.568 { 00:13:06.568 "name": null, 00:13:06.568 "uuid": "2ee7db2f-af9c-452d-a820-4d4343c575d9", 00:13:06.568 "is_configured": false, 00:13:06.568 "data_offset": 0, 00:13:06.568 "data_size": 65536 00:13:06.568 }, 00:13:06.568 { 00:13:06.568 "name": "BaseBdev3", 00:13:06.568 "uuid": "25ce708d-7472-481d-afac-776accdde5ae", 00:13:06.568 "is_configured": true, 00:13:06.568 "data_offset": 0, 00:13:06.568 "data_size": 65536 00:13:06.568 }, 00:13:06.568 { 00:13:06.568 "name": "BaseBdev4", 00:13:06.568 "uuid": "9969b386-9d83-4b39-a5d2-73ed7f9fee27", 00:13:06.568 "is_configured": true, 00:13:06.568 "data_offset": 0, 00:13:06.568 "data_size": 65536 00:13:06.568 } 00:13:06.568 ] 00:13:06.568 }' 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.568 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.137 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.137 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.137 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.137 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:07.137 11:22:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.137 [2024-11-20 11:22:50.020598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
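Removing a base bdev from a still-configuring raid does not shrink the array: as the state dumps in this log show, the slot keeps its position and uuid, its `name` becomes null, `is_configured` flips to false, and only `num_base_bdevs_discovered` drops while `num_base_bdevs_operational` stays at 4. A minimal Python sketch of that slot transformation (not SPDK code; `remove_base_bdev` here is an illustrative stand-in for the RPC's effect on the list):

```python
# Model the base_bdevs_list transformation performed by
# bdev_raid_remove_base_bdev on a configuring raid: null out the slot's
# name, clear is_configured, and report how many configured slots remain.
def remove_base_bdev(base_bdevs_list, name):
    for slot in base_bdevs_list:
        if slot["name"] == name:
            slot["name"] = None
            slot["is_configured"] = False
    return sum(1 for s in base_bdevs_list if s["is_configured"])

slots = [
    {"name": "BaseBdev1", "is_configured": True},
    {"name": None, "is_configured": False},   # BaseBdev2, removed earlier
    {"name": "BaseBdev3", "is_configured": True},
    {"name": "BaseBdev4", "is_configured": True},
]
print(remove_base_bdev(slots, "BaseBdev3"))  # 2 configured slots remain
```

This is why the jq probes in the log index fixed positions such as `.[0].base_bdevs_list[2].is_configured` rather than searching by name: removed members remain in place as null-named slots.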
00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.137 "name": "Existed_Raid", 00:13:07.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.137 "strip_size_kb": 0, 00:13:07.137 "state": "configuring", 00:13:07.137 "raid_level": "raid1", 00:13:07.137 "superblock": false, 00:13:07.137 "num_base_bdevs": 4, 00:13:07.137 "num_base_bdevs_discovered": 2, 00:13:07.137 "num_base_bdevs_operational": 4, 00:13:07.137 "base_bdevs_list": [ 00:13:07.137 { 00:13:07.137 "name": "BaseBdev1", 00:13:07.137 "uuid": "0eedcda5-bf7b-4eef-bc08-dd66f0f94aeb", 00:13:07.137 "is_configured": true, 00:13:07.137 "data_offset": 0, 00:13:07.137 "data_size": 65536 00:13:07.137 }, 00:13:07.137 { 00:13:07.137 "name": null, 00:13:07.137 "uuid": "2ee7db2f-af9c-452d-a820-4d4343c575d9", 00:13:07.137 "is_configured": false, 00:13:07.137 "data_offset": 0, 00:13:07.137 "data_size": 65536 00:13:07.137 }, 00:13:07.137 { 00:13:07.137 "name": null, 00:13:07.137 "uuid": "25ce708d-7472-481d-afac-776accdde5ae", 00:13:07.137 "is_configured": false, 00:13:07.137 "data_offset": 0, 00:13:07.137 "data_size": 65536 00:13:07.137 }, 00:13:07.137 { 00:13:07.137 "name": "BaseBdev4", 00:13:07.137 "uuid": "9969b386-9d83-4b39-a5d2-73ed7f9fee27", 00:13:07.137 "is_configured": true, 00:13:07.137 "data_offset": 0, 00:13:07.137 "data_size": 65536 00:13:07.137 } 00:13:07.137 ] 00:13:07.137 }' 00:13:07.137 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.137 11:22:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.400 [2024-11-20 11:22:50.495677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.400 11:22:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.400 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.691 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.691 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.691 "name": "Existed_Raid", 00:13:07.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.691 "strip_size_kb": 0, 00:13:07.691 "state": "configuring", 00:13:07.691 "raid_level": "raid1", 00:13:07.691 "superblock": false, 00:13:07.691 "num_base_bdevs": 4, 00:13:07.691 "num_base_bdevs_discovered": 3, 00:13:07.691 "num_base_bdevs_operational": 4, 00:13:07.691 "base_bdevs_list": [ 00:13:07.691 { 00:13:07.691 "name": "BaseBdev1", 00:13:07.691 "uuid": "0eedcda5-bf7b-4eef-bc08-dd66f0f94aeb", 00:13:07.691 "is_configured": true, 00:13:07.691 "data_offset": 0, 00:13:07.691 "data_size": 65536 00:13:07.691 }, 00:13:07.691 { 00:13:07.691 "name": null, 00:13:07.691 "uuid": "2ee7db2f-af9c-452d-a820-4d4343c575d9", 00:13:07.691 "is_configured": false, 00:13:07.691 "data_offset": 
0, 00:13:07.691 "data_size": 65536 00:13:07.691 }, 00:13:07.691 { 00:13:07.691 "name": "BaseBdev3", 00:13:07.691 "uuid": "25ce708d-7472-481d-afac-776accdde5ae", 00:13:07.691 "is_configured": true, 00:13:07.691 "data_offset": 0, 00:13:07.691 "data_size": 65536 00:13:07.691 }, 00:13:07.691 { 00:13:07.691 "name": "BaseBdev4", 00:13:07.691 "uuid": "9969b386-9d83-4b39-a5d2-73ed7f9fee27", 00:13:07.691 "is_configured": true, 00:13:07.691 "data_offset": 0, 00:13:07.691 "data_size": 65536 00:13:07.691 } 00:13:07.691 ] 00:13:07.691 }' 00:13:07.691 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.691 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.951 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:07.951 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.951 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.951 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.951 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.951 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:07.951 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:07.951 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.951 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.951 [2024-11-20 11:22:51.010957] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:08.210 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.210 11:22:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:08.210 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:08.210 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:08.210 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.210 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.210 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:08.210 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.210 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.210 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.210 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.210 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.210 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.210 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.210 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.210 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.210 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.210 "name": "Existed_Raid", 00:13:08.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.210 "strip_size_kb": 0, 00:13:08.210 "state": "configuring", 00:13:08.210 
"raid_level": "raid1", 00:13:08.210 "superblock": false, 00:13:08.210 "num_base_bdevs": 4, 00:13:08.210 "num_base_bdevs_discovered": 2, 00:13:08.210 "num_base_bdevs_operational": 4, 00:13:08.210 "base_bdevs_list": [ 00:13:08.210 { 00:13:08.211 "name": null, 00:13:08.211 "uuid": "0eedcda5-bf7b-4eef-bc08-dd66f0f94aeb", 00:13:08.211 "is_configured": false, 00:13:08.211 "data_offset": 0, 00:13:08.211 "data_size": 65536 00:13:08.211 }, 00:13:08.211 { 00:13:08.211 "name": null, 00:13:08.211 "uuid": "2ee7db2f-af9c-452d-a820-4d4343c575d9", 00:13:08.211 "is_configured": false, 00:13:08.211 "data_offset": 0, 00:13:08.211 "data_size": 65536 00:13:08.211 }, 00:13:08.211 { 00:13:08.211 "name": "BaseBdev3", 00:13:08.211 "uuid": "25ce708d-7472-481d-afac-776accdde5ae", 00:13:08.211 "is_configured": true, 00:13:08.211 "data_offset": 0, 00:13:08.211 "data_size": 65536 00:13:08.211 }, 00:13:08.211 { 00:13:08.211 "name": "BaseBdev4", 00:13:08.211 "uuid": "9969b386-9d83-4b39-a5d2-73ed7f9fee27", 00:13:08.211 "is_configured": true, 00:13:08.211 "data_offset": 0, 00:13:08.211 "data_size": 65536 00:13:08.211 } 00:13:08.211 ] 00:13:08.211 }' 00:13:08.211 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.211 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.780 [2024-11-20 11:22:51.685638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.780 "name": "Existed_Raid", 00:13:08.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.780 "strip_size_kb": 0, 00:13:08.780 "state": "configuring", 00:13:08.780 "raid_level": "raid1", 00:13:08.780 "superblock": false, 00:13:08.780 "num_base_bdevs": 4, 00:13:08.780 "num_base_bdevs_discovered": 3, 00:13:08.780 "num_base_bdevs_operational": 4, 00:13:08.780 "base_bdevs_list": [ 00:13:08.780 { 00:13:08.780 "name": null, 00:13:08.780 "uuid": "0eedcda5-bf7b-4eef-bc08-dd66f0f94aeb", 00:13:08.780 "is_configured": false, 00:13:08.780 "data_offset": 0, 00:13:08.780 "data_size": 65536 00:13:08.780 }, 00:13:08.780 { 00:13:08.780 "name": "BaseBdev2", 00:13:08.780 "uuid": "2ee7db2f-af9c-452d-a820-4d4343c575d9", 00:13:08.780 "is_configured": true, 00:13:08.780 "data_offset": 0, 00:13:08.780 "data_size": 65536 00:13:08.780 }, 00:13:08.780 { 00:13:08.780 "name": "BaseBdev3", 00:13:08.780 "uuid": "25ce708d-7472-481d-afac-776accdde5ae", 00:13:08.780 "is_configured": true, 00:13:08.780 "data_offset": 0, 00:13:08.780 "data_size": 65536 00:13:08.780 }, 00:13:08.780 { 00:13:08.780 "name": "BaseBdev4", 00:13:08.780 "uuid": "9969b386-9d83-4b39-a5d2-73ed7f9fee27", 00:13:08.780 "is_configured": true, 00:13:08.780 "data_offset": 0, 00:13:08.780 "data_size": 65536 00:13:08.780 } 00:13:08.780 ] 00:13:08.780 }' 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.780 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.039 11:22:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.039 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.039 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.039 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:09.039 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0eedcda5-bf7b-4eef-bc08-dd66f0f94aeb 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.298 [2024-11-20 11:22:52.248245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:09.298 [2024-11-20 11:22:52.248404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:09.298 [2024-11-20 11:22:52.248440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:09.298 
[2024-11-20 11:22:52.248803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:09.298 [2024-11-20 11:22:52.249039] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:09.298 [2024-11-20 11:22:52.249091] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:09.298 [2024-11-20 11:22:52.249463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.298 NewBaseBdev 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.298 [ 00:13:09.298 { 00:13:09.298 "name": "NewBaseBdev", 00:13:09.298 "aliases": [ 00:13:09.298 "0eedcda5-bf7b-4eef-bc08-dd66f0f94aeb" 00:13:09.298 ], 00:13:09.298 "product_name": "Malloc disk", 00:13:09.298 "block_size": 512, 00:13:09.298 "num_blocks": 65536, 00:13:09.298 "uuid": "0eedcda5-bf7b-4eef-bc08-dd66f0f94aeb", 00:13:09.298 "assigned_rate_limits": { 00:13:09.298 "rw_ios_per_sec": 0, 00:13:09.298 "rw_mbytes_per_sec": 0, 00:13:09.298 "r_mbytes_per_sec": 0, 00:13:09.298 "w_mbytes_per_sec": 0 00:13:09.298 }, 00:13:09.298 "claimed": true, 00:13:09.298 "claim_type": "exclusive_write", 00:13:09.298 "zoned": false, 00:13:09.298 "supported_io_types": { 00:13:09.298 "read": true, 00:13:09.298 "write": true, 00:13:09.298 "unmap": true, 00:13:09.298 "flush": true, 00:13:09.298 "reset": true, 00:13:09.298 "nvme_admin": false, 00:13:09.298 "nvme_io": false, 00:13:09.298 "nvme_io_md": false, 00:13:09.298 "write_zeroes": true, 00:13:09.298 "zcopy": true, 00:13:09.298 "get_zone_info": false, 00:13:09.298 "zone_management": false, 00:13:09.298 "zone_append": false, 00:13:09.298 "compare": false, 00:13:09.298 "compare_and_write": false, 00:13:09.298 "abort": true, 00:13:09.298 "seek_hole": false, 00:13:09.298 "seek_data": false, 00:13:09.298 "copy": true, 00:13:09.298 "nvme_iov_md": false 00:13:09.298 }, 00:13:09.298 "memory_domains": [ 00:13:09.298 { 00:13:09.298 "dma_device_id": "system", 00:13:09.298 "dma_device_type": 1 00:13:09.298 }, 00:13:09.298 { 00:13:09.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.298 "dma_device_type": 2 00:13:09.298 } 00:13:09.298 ], 00:13:09.298 "driver_specific": {} 00:13:09.298 } 00:13:09.298 ] 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.298 "name": "Existed_Raid", 00:13:09.298 "uuid": "f550cd2f-35b5-4a0b-ac46-a9216938b468", 00:13:09.298 "strip_size_kb": 0, 00:13:09.298 "state": "online", 00:13:09.298 
"raid_level": "raid1", 00:13:09.298 "superblock": false, 00:13:09.298 "num_base_bdevs": 4, 00:13:09.298 "num_base_bdevs_discovered": 4, 00:13:09.298 "num_base_bdevs_operational": 4, 00:13:09.298 "base_bdevs_list": [ 00:13:09.298 { 00:13:09.298 "name": "NewBaseBdev", 00:13:09.298 "uuid": "0eedcda5-bf7b-4eef-bc08-dd66f0f94aeb", 00:13:09.298 "is_configured": true, 00:13:09.298 "data_offset": 0, 00:13:09.298 "data_size": 65536 00:13:09.298 }, 00:13:09.298 { 00:13:09.298 "name": "BaseBdev2", 00:13:09.298 "uuid": "2ee7db2f-af9c-452d-a820-4d4343c575d9", 00:13:09.298 "is_configured": true, 00:13:09.298 "data_offset": 0, 00:13:09.298 "data_size": 65536 00:13:09.298 }, 00:13:09.298 { 00:13:09.298 "name": "BaseBdev3", 00:13:09.298 "uuid": "25ce708d-7472-481d-afac-776accdde5ae", 00:13:09.298 "is_configured": true, 00:13:09.298 "data_offset": 0, 00:13:09.298 "data_size": 65536 00:13:09.298 }, 00:13:09.298 { 00:13:09.298 "name": "BaseBdev4", 00:13:09.298 "uuid": "9969b386-9d83-4b39-a5d2-73ed7f9fee27", 00:13:09.298 "is_configured": true, 00:13:09.298 "data_offset": 0, 00:13:09.298 "data_size": 65536 00:13:09.298 } 00:13:09.298 ] 00:13:09.298 }' 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.298 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:09.867 [2024-11-20 11:22:52.712006] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:09.867 "name": "Existed_Raid", 00:13:09.867 "aliases": [ 00:13:09.867 "f550cd2f-35b5-4a0b-ac46-a9216938b468" 00:13:09.867 ], 00:13:09.867 "product_name": "Raid Volume", 00:13:09.867 "block_size": 512, 00:13:09.867 "num_blocks": 65536, 00:13:09.867 "uuid": "f550cd2f-35b5-4a0b-ac46-a9216938b468", 00:13:09.867 "assigned_rate_limits": { 00:13:09.867 "rw_ios_per_sec": 0, 00:13:09.867 "rw_mbytes_per_sec": 0, 00:13:09.867 "r_mbytes_per_sec": 0, 00:13:09.867 "w_mbytes_per_sec": 0 00:13:09.867 }, 00:13:09.867 "claimed": false, 00:13:09.867 "zoned": false, 00:13:09.867 "supported_io_types": { 00:13:09.867 "read": true, 00:13:09.867 "write": true, 00:13:09.867 "unmap": false, 00:13:09.867 "flush": false, 00:13:09.867 "reset": true, 00:13:09.867 "nvme_admin": false, 00:13:09.867 "nvme_io": false, 00:13:09.867 "nvme_io_md": false, 00:13:09.867 "write_zeroes": true, 00:13:09.867 "zcopy": false, 00:13:09.867 "get_zone_info": false, 00:13:09.867 "zone_management": false, 00:13:09.867 "zone_append": false, 00:13:09.867 "compare": false, 00:13:09.867 "compare_and_write": false, 00:13:09.867 "abort": false, 00:13:09.867 "seek_hole": false, 00:13:09.867 "seek_data": false, 00:13:09.867 
"copy": false, 00:13:09.867 "nvme_iov_md": false 00:13:09.867 }, 00:13:09.867 "memory_domains": [ 00:13:09.867 { 00:13:09.867 "dma_device_id": "system", 00:13:09.867 "dma_device_type": 1 00:13:09.867 }, 00:13:09.867 { 00:13:09.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.867 "dma_device_type": 2 00:13:09.867 }, 00:13:09.867 { 00:13:09.867 "dma_device_id": "system", 00:13:09.867 "dma_device_type": 1 00:13:09.867 }, 00:13:09.867 { 00:13:09.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.867 "dma_device_type": 2 00:13:09.867 }, 00:13:09.867 { 00:13:09.867 "dma_device_id": "system", 00:13:09.867 "dma_device_type": 1 00:13:09.867 }, 00:13:09.867 { 00:13:09.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.867 "dma_device_type": 2 00:13:09.867 }, 00:13:09.867 { 00:13:09.867 "dma_device_id": "system", 00:13:09.867 "dma_device_type": 1 00:13:09.867 }, 00:13:09.867 { 00:13:09.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.867 "dma_device_type": 2 00:13:09.867 } 00:13:09.867 ], 00:13:09.867 "driver_specific": { 00:13:09.867 "raid": { 00:13:09.867 "uuid": "f550cd2f-35b5-4a0b-ac46-a9216938b468", 00:13:09.867 "strip_size_kb": 0, 00:13:09.867 "state": "online", 00:13:09.867 "raid_level": "raid1", 00:13:09.867 "superblock": false, 00:13:09.867 "num_base_bdevs": 4, 00:13:09.867 "num_base_bdevs_discovered": 4, 00:13:09.867 "num_base_bdevs_operational": 4, 00:13:09.867 "base_bdevs_list": [ 00:13:09.867 { 00:13:09.867 "name": "NewBaseBdev", 00:13:09.867 "uuid": "0eedcda5-bf7b-4eef-bc08-dd66f0f94aeb", 00:13:09.867 "is_configured": true, 00:13:09.867 "data_offset": 0, 00:13:09.867 "data_size": 65536 00:13:09.867 }, 00:13:09.867 { 00:13:09.867 "name": "BaseBdev2", 00:13:09.867 "uuid": "2ee7db2f-af9c-452d-a820-4d4343c575d9", 00:13:09.867 "is_configured": true, 00:13:09.867 "data_offset": 0, 00:13:09.867 "data_size": 65536 00:13:09.867 }, 00:13:09.867 { 00:13:09.867 "name": "BaseBdev3", 00:13:09.867 "uuid": "25ce708d-7472-481d-afac-776accdde5ae", 00:13:09.867 
"is_configured": true, 00:13:09.867 "data_offset": 0, 00:13:09.867 "data_size": 65536 00:13:09.867 }, 00:13:09.867 { 00:13:09.867 "name": "BaseBdev4", 00:13:09.867 "uuid": "9969b386-9d83-4b39-a5d2-73ed7f9fee27", 00:13:09.867 "is_configured": true, 00:13:09.867 "data_offset": 0, 00:13:09.867 "data_size": 65536 00:13:09.867 } 00:13:09.867 ] 00:13:09.867 } 00:13:09.867 } 00:13:09.867 }' 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:09.867 BaseBdev2 00:13:09.867 BaseBdev3 00:13:09.867 BaseBdev4' 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:09.867 11:22:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:09.867 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:09.868 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:09.868 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.868 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.868 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.868 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:10.128 11:22:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.128 [2024-11-20 11:22:53.067136] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:10.128 [2024-11-20 11:22:53.067246] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:10.128 [2024-11-20 11:22:53.067446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:10.128 [2024-11-20 11:22:53.067889] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:10.128 [2024-11-20 11:22:53.067966] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73323 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73323 ']' 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73323 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73323 00:13:10.128 killing process with pid 73323 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73323' 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73323 00:13:10.128 [2024-11-20 11:22:53.112397] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:10.128 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73323 00:13:10.698 [2024-11-20 11:22:53.542799] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:11.635 11:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:11.635 00:13:11.635 real 0m12.137s 00:13:11.635 user 0m19.217s 00:13:11.635 sys 0m2.129s 00:13:11.635 11:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.635 ************************************ 00:13:11.635 END TEST raid_state_function_test 00:13:11.635 ************************************ 00:13:11.635 11:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:11.895 11:22:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:13:11.895 11:22:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:11.895 11:22:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.895 11:22:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:11.895 ************************************ 00:13:11.895 START TEST raid_state_function_test_sb 00:13:11.895 ************************************ 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.895 
11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:11.895 11:22:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74000 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74000' 00:13:11.895 Process raid pid: 74000 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74000 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74000 ']' 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.895 11:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.895 [2024-11-20 11:22:54.854133] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:13:11.895 [2024-11-20 11:22:54.854255] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.155 [2024-11-20 11:22:55.031065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.155 [2024-11-20 11:22:55.155375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.415 [2024-11-20 11:22:55.377750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.415 [2024-11-20 11:22:55.377798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.674 11:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.674 11:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:12.674 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:12.674 11:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.674 11:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.674 [2024-11-20 11:22:55.730556] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:12.674 [2024-11-20 11:22:55.730616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:12.674 [2024-11-20 11:22:55.730628] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:12.674 [2024-11-20 11:22:55.730639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:12.674 [2024-11-20 11:22:55.730647] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:13:12.674 [2024-11-20 11:22:55.730656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:12.674 [2024-11-20 11:22:55.730663] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:12.675 [2024-11-20 11:22:55.730673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:12.675 11:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.675 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:12.675 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.675 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.675 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.675 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.675 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:12.675 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.675 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.675 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.675 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.675 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.675 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.675 11:22:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.675 11:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.675 11:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.675 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.675 "name": "Existed_Raid", 00:13:12.675 "uuid": "c92b3f52-c9c7-4df6-b215-52a5ab1d1186", 00:13:12.675 "strip_size_kb": 0, 00:13:12.675 "state": "configuring", 00:13:12.675 "raid_level": "raid1", 00:13:12.675 "superblock": true, 00:13:12.675 "num_base_bdevs": 4, 00:13:12.675 "num_base_bdevs_discovered": 0, 00:13:12.675 "num_base_bdevs_operational": 4, 00:13:12.675 "base_bdevs_list": [ 00:13:12.675 { 00:13:12.675 "name": "BaseBdev1", 00:13:12.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.675 "is_configured": false, 00:13:12.675 "data_offset": 0, 00:13:12.675 "data_size": 0 00:13:12.675 }, 00:13:12.675 { 00:13:12.675 "name": "BaseBdev2", 00:13:12.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.675 "is_configured": false, 00:13:12.675 "data_offset": 0, 00:13:12.675 "data_size": 0 00:13:12.675 }, 00:13:12.675 { 00:13:12.675 "name": "BaseBdev3", 00:13:12.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.675 "is_configured": false, 00:13:12.675 "data_offset": 0, 00:13:12.675 "data_size": 0 00:13:12.675 }, 00:13:12.675 { 00:13:12.675 "name": "BaseBdev4", 00:13:12.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.675 "is_configured": false, 00:13:12.675 "data_offset": 0, 00:13:12.675 "data_size": 0 00:13:12.675 } 00:13:12.675 ] 00:13:12.675 }' 00:13:12.675 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.675 11:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.244 11:22:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:13.244 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.244 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.244 [2024-11-20 11:22:56.205640] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:13.245 [2024-11-20 11:22:56.205736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.245 [2024-11-20 11:22:56.217649] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:13.245 [2024-11-20 11:22:56.217733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:13.245 [2024-11-20 11:22:56.217765] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:13.245 [2024-11-20 11:22:56.217813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:13.245 [2024-11-20 11:22:56.217842] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:13.245 [2024-11-20 11:22:56.217875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:13.245 [2024-11-20 11:22:56.217920] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:13:13.245 [2024-11-20 11:22:56.217953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.245 [2024-11-20 11:22:56.267760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.245 BaseBdev1 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.245 [ 00:13:13.245 { 00:13:13.245 "name": "BaseBdev1", 00:13:13.245 "aliases": [ 00:13:13.245 "1a9adc04-2fd3-478c-ba1f-9966bff25637" 00:13:13.245 ], 00:13:13.245 "product_name": "Malloc disk", 00:13:13.245 "block_size": 512, 00:13:13.245 "num_blocks": 65536, 00:13:13.245 "uuid": "1a9adc04-2fd3-478c-ba1f-9966bff25637", 00:13:13.245 "assigned_rate_limits": { 00:13:13.245 "rw_ios_per_sec": 0, 00:13:13.245 "rw_mbytes_per_sec": 0, 00:13:13.245 "r_mbytes_per_sec": 0, 00:13:13.245 "w_mbytes_per_sec": 0 00:13:13.245 }, 00:13:13.245 "claimed": true, 00:13:13.245 "claim_type": "exclusive_write", 00:13:13.245 "zoned": false, 00:13:13.245 "supported_io_types": { 00:13:13.245 "read": true, 00:13:13.245 "write": true, 00:13:13.245 "unmap": true, 00:13:13.245 "flush": true, 00:13:13.245 "reset": true, 00:13:13.245 "nvme_admin": false, 00:13:13.245 "nvme_io": false, 00:13:13.245 "nvme_io_md": false, 00:13:13.245 "write_zeroes": true, 00:13:13.245 "zcopy": true, 00:13:13.245 "get_zone_info": false, 00:13:13.245 "zone_management": false, 00:13:13.245 "zone_append": false, 00:13:13.245 "compare": false, 00:13:13.245 "compare_and_write": false, 00:13:13.245 "abort": true, 00:13:13.245 "seek_hole": false, 00:13:13.245 "seek_data": false, 00:13:13.245 "copy": true, 00:13:13.245 "nvme_iov_md": false 00:13:13.245 }, 00:13:13.245 "memory_domains": [ 00:13:13.245 { 00:13:13.245 "dma_device_id": "system", 00:13:13.245 "dma_device_type": 1 00:13:13.245 }, 00:13:13.245 { 00:13:13.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.245 "dma_device_type": 2 00:13:13.245 } 00:13:13.245 ], 00:13:13.245 "driver_specific": {} 
00:13:13.245 } 00:13:13.245 ] 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.245 11:22:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.504 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.504 "name": "Existed_Raid", 00:13:13.504 "uuid": "25c8483b-707d-4434-a315-8ae475f0efda", 00:13:13.504 "strip_size_kb": 0, 00:13:13.504 "state": "configuring", 00:13:13.504 "raid_level": "raid1", 00:13:13.504 "superblock": true, 00:13:13.504 "num_base_bdevs": 4, 00:13:13.504 "num_base_bdevs_discovered": 1, 00:13:13.504 "num_base_bdevs_operational": 4, 00:13:13.504 "base_bdevs_list": [ 00:13:13.504 { 00:13:13.504 "name": "BaseBdev1", 00:13:13.504 "uuid": "1a9adc04-2fd3-478c-ba1f-9966bff25637", 00:13:13.504 "is_configured": true, 00:13:13.504 "data_offset": 2048, 00:13:13.504 "data_size": 63488 00:13:13.504 }, 00:13:13.504 { 00:13:13.504 "name": "BaseBdev2", 00:13:13.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.504 "is_configured": false, 00:13:13.504 "data_offset": 0, 00:13:13.504 "data_size": 0 00:13:13.504 }, 00:13:13.504 { 00:13:13.504 "name": "BaseBdev3", 00:13:13.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.504 "is_configured": false, 00:13:13.504 "data_offset": 0, 00:13:13.504 "data_size": 0 00:13:13.504 }, 00:13:13.504 { 00:13:13.504 "name": "BaseBdev4", 00:13:13.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.504 "is_configured": false, 00:13:13.504 "data_offset": 0, 00:13:13.504 "data_size": 0 00:13:13.504 } 00:13:13.504 ] 00:13:13.504 }' 00:13:13.504 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.504 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:13.764 [2024-11-20 11:22:56.774953] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:13.764 [2024-11-20 11:22:56.775060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.764 [2024-11-20 11:22:56.786980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.764 [2024-11-20 11:22:56.788841] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:13.764 [2024-11-20 11:22:56.788923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:13.764 [2024-11-20 11:22:56.788959] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:13.764 [2024-11-20 11:22:56.788985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:13.764 [2024-11-20 11:22:56.789046] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:13.764 [2024-11-20 11:22:56.789072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:13.764 11:22:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.764 "name": 
"Existed_Raid", 00:13:13.764 "uuid": "3217ecf4-4a0a-44fc-a2be-dc35e7e74261", 00:13:13.764 "strip_size_kb": 0, 00:13:13.764 "state": "configuring", 00:13:13.764 "raid_level": "raid1", 00:13:13.764 "superblock": true, 00:13:13.764 "num_base_bdevs": 4, 00:13:13.764 "num_base_bdevs_discovered": 1, 00:13:13.764 "num_base_bdevs_operational": 4, 00:13:13.764 "base_bdevs_list": [ 00:13:13.764 { 00:13:13.764 "name": "BaseBdev1", 00:13:13.764 "uuid": "1a9adc04-2fd3-478c-ba1f-9966bff25637", 00:13:13.764 "is_configured": true, 00:13:13.764 "data_offset": 2048, 00:13:13.764 "data_size": 63488 00:13:13.764 }, 00:13:13.764 { 00:13:13.764 "name": "BaseBdev2", 00:13:13.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.764 "is_configured": false, 00:13:13.764 "data_offset": 0, 00:13:13.764 "data_size": 0 00:13:13.764 }, 00:13:13.764 { 00:13:13.764 "name": "BaseBdev3", 00:13:13.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.764 "is_configured": false, 00:13:13.764 "data_offset": 0, 00:13:13.764 "data_size": 0 00:13:13.764 }, 00:13:13.764 { 00:13:13.764 "name": "BaseBdev4", 00:13:13.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.764 "is_configured": false, 00:13:13.764 "data_offset": 0, 00:13:13.764 "data_size": 0 00:13:13.764 } 00:13:13.764 ] 00:13:13.764 }' 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.764 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.334 [2024-11-20 11:22:57.336825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:14.334 
BaseBdev2 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.334 [ 00:13:14.334 { 00:13:14.334 "name": "BaseBdev2", 00:13:14.334 "aliases": [ 00:13:14.334 "654425b2-ba0a-4b5f-93e3-1a0ca1bc87cf" 00:13:14.334 ], 00:13:14.334 "product_name": "Malloc disk", 00:13:14.334 "block_size": 512, 00:13:14.334 "num_blocks": 65536, 00:13:14.334 "uuid": "654425b2-ba0a-4b5f-93e3-1a0ca1bc87cf", 00:13:14.334 "assigned_rate_limits": { 
00:13:14.334 "rw_ios_per_sec": 0, 00:13:14.334 "rw_mbytes_per_sec": 0, 00:13:14.334 "r_mbytes_per_sec": 0, 00:13:14.334 "w_mbytes_per_sec": 0 00:13:14.334 }, 00:13:14.334 "claimed": true, 00:13:14.334 "claim_type": "exclusive_write", 00:13:14.334 "zoned": false, 00:13:14.334 "supported_io_types": { 00:13:14.334 "read": true, 00:13:14.334 "write": true, 00:13:14.334 "unmap": true, 00:13:14.334 "flush": true, 00:13:14.334 "reset": true, 00:13:14.334 "nvme_admin": false, 00:13:14.334 "nvme_io": false, 00:13:14.334 "nvme_io_md": false, 00:13:14.334 "write_zeroes": true, 00:13:14.334 "zcopy": true, 00:13:14.334 "get_zone_info": false, 00:13:14.334 "zone_management": false, 00:13:14.334 "zone_append": false, 00:13:14.334 "compare": false, 00:13:14.334 "compare_and_write": false, 00:13:14.334 "abort": true, 00:13:14.334 "seek_hole": false, 00:13:14.334 "seek_data": false, 00:13:14.334 "copy": true, 00:13:14.334 "nvme_iov_md": false 00:13:14.334 }, 00:13:14.334 "memory_domains": [ 00:13:14.334 { 00:13:14.334 "dma_device_id": "system", 00:13:14.334 "dma_device_type": 1 00:13:14.334 }, 00:13:14.334 { 00:13:14.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.334 "dma_device_type": 2 00:13:14.334 } 00:13:14.334 ], 00:13:14.334 "driver_specific": {} 00:13:14.334 } 00:13:14.334 ] 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.334 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.334 "name": "Existed_Raid", 00:13:14.334 "uuid": "3217ecf4-4a0a-44fc-a2be-dc35e7e74261", 00:13:14.334 "strip_size_kb": 0, 00:13:14.334 "state": "configuring", 00:13:14.334 "raid_level": "raid1", 00:13:14.334 "superblock": true, 00:13:14.334 "num_base_bdevs": 4, 00:13:14.334 "num_base_bdevs_discovered": 2, 00:13:14.334 "num_base_bdevs_operational": 4, 00:13:14.334 
"base_bdevs_list": [ 00:13:14.334 { 00:13:14.335 "name": "BaseBdev1", 00:13:14.335 "uuid": "1a9adc04-2fd3-478c-ba1f-9966bff25637", 00:13:14.335 "is_configured": true, 00:13:14.335 "data_offset": 2048, 00:13:14.335 "data_size": 63488 00:13:14.335 }, 00:13:14.335 { 00:13:14.335 "name": "BaseBdev2", 00:13:14.335 "uuid": "654425b2-ba0a-4b5f-93e3-1a0ca1bc87cf", 00:13:14.335 "is_configured": true, 00:13:14.335 "data_offset": 2048, 00:13:14.335 "data_size": 63488 00:13:14.335 }, 00:13:14.335 { 00:13:14.335 "name": "BaseBdev3", 00:13:14.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.335 "is_configured": false, 00:13:14.335 "data_offset": 0, 00:13:14.335 "data_size": 0 00:13:14.335 }, 00:13:14.335 { 00:13:14.335 "name": "BaseBdev4", 00:13:14.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.335 "is_configured": false, 00:13:14.335 "data_offset": 0, 00:13:14.335 "data_size": 0 00:13:14.335 } 00:13:14.335 ] 00:13:14.335 }' 00:13:14.335 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.335 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.901 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:14.901 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.901 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.901 [2024-11-20 11:22:57.885981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:14.901 BaseBdev3 00:13:14.901 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.901 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:14.901 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:13:14.901 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:14.901 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:14.901 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:14.901 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:14.901 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:14.901 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.901 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.901 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.901 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:14.901 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.901 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.901 [ 00:13:14.901 { 00:13:14.901 "name": "BaseBdev3", 00:13:14.901 "aliases": [ 00:13:14.901 "056ae784-d633-4c76-8ede-6da2762a78d1" 00:13:14.901 ], 00:13:14.901 "product_name": "Malloc disk", 00:13:14.901 "block_size": 512, 00:13:14.901 "num_blocks": 65536, 00:13:14.901 "uuid": "056ae784-d633-4c76-8ede-6da2762a78d1", 00:13:14.901 "assigned_rate_limits": { 00:13:14.901 "rw_ios_per_sec": 0, 00:13:14.901 "rw_mbytes_per_sec": 0, 00:13:14.901 "r_mbytes_per_sec": 0, 00:13:14.901 "w_mbytes_per_sec": 0 00:13:14.901 }, 00:13:14.901 "claimed": true, 00:13:14.901 "claim_type": "exclusive_write", 00:13:14.902 "zoned": false, 00:13:14.902 "supported_io_types": { 00:13:14.902 "read": true, 00:13:14.902 
"write": true, 00:13:14.902 "unmap": true, 00:13:14.902 "flush": true, 00:13:14.902 "reset": true, 00:13:14.902 "nvme_admin": false, 00:13:14.902 "nvme_io": false, 00:13:14.902 "nvme_io_md": false, 00:13:14.902 "write_zeroes": true, 00:13:14.902 "zcopy": true, 00:13:14.902 "get_zone_info": false, 00:13:14.902 "zone_management": false, 00:13:14.902 "zone_append": false, 00:13:14.902 "compare": false, 00:13:14.902 "compare_and_write": false, 00:13:14.902 "abort": true, 00:13:14.902 "seek_hole": false, 00:13:14.902 "seek_data": false, 00:13:14.902 "copy": true, 00:13:14.902 "nvme_iov_md": false 00:13:14.902 }, 00:13:14.902 "memory_domains": [ 00:13:14.902 { 00:13:14.902 "dma_device_id": "system", 00:13:14.902 "dma_device_type": 1 00:13:14.902 }, 00:13:14.902 { 00:13:14.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.902 "dma_device_type": 2 00:13:14.902 } 00:13:14.902 ], 00:13:14.902 "driver_specific": {} 00:13:14.902 } 00:13:14.902 ] 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.902 "name": "Existed_Raid", 00:13:14.902 "uuid": "3217ecf4-4a0a-44fc-a2be-dc35e7e74261", 00:13:14.902 "strip_size_kb": 0, 00:13:14.902 "state": "configuring", 00:13:14.902 "raid_level": "raid1", 00:13:14.902 "superblock": true, 00:13:14.902 "num_base_bdevs": 4, 00:13:14.902 "num_base_bdevs_discovered": 3, 00:13:14.902 "num_base_bdevs_operational": 4, 00:13:14.902 "base_bdevs_list": [ 00:13:14.902 { 00:13:14.902 "name": "BaseBdev1", 00:13:14.902 "uuid": "1a9adc04-2fd3-478c-ba1f-9966bff25637", 00:13:14.902 "is_configured": true, 00:13:14.902 "data_offset": 2048, 00:13:14.902 "data_size": 63488 00:13:14.902 }, 00:13:14.902 { 00:13:14.902 "name": "BaseBdev2", 00:13:14.902 "uuid": 
"654425b2-ba0a-4b5f-93e3-1a0ca1bc87cf", 00:13:14.902 "is_configured": true, 00:13:14.902 "data_offset": 2048, 00:13:14.902 "data_size": 63488 00:13:14.902 }, 00:13:14.902 { 00:13:14.902 "name": "BaseBdev3", 00:13:14.902 "uuid": "056ae784-d633-4c76-8ede-6da2762a78d1", 00:13:14.902 "is_configured": true, 00:13:14.902 "data_offset": 2048, 00:13:14.902 "data_size": 63488 00:13:14.902 }, 00:13:14.902 { 00:13:14.902 "name": "BaseBdev4", 00:13:14.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.902 "is_configured": false, 00:13:14.902 "data_offset": 0, 00:13:14.902 "data_size": 0 00:13:14.902 } 00:13:14.902 ] 00:13:14.902 }' 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.902 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.470 [2024-11-20 11:22:58.385194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:15.470 [2024-11-20 11:22:58.385608] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:15.470 [2024-11-20 11:22:58.385666] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:15.470 [2024-11-20 11:22:58.386014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:15.470 BaseBdev4 00:13:15.470 [2024-11-20 11:22:58.386223] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:15.470 [2024-11-20 11:22:58.386239] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:15.470 [2024-11-20 11:22:58.386389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.470 [ 00:13:15.470 { 00:13:15.470 "name": "BaseBdev4", 00:13:15.470 "aliases": [ 00:13:15.470 "9b77acad-9788-471f-b05a-b257997642e1" 00:13:15.470 ], 00:13:15.470 "product_name": "Malloc disk", 00:13:15.470 "block_size": 512, 00:13:15.470 
"num_blocks": 65536, 00:13:15.470 "uuid": "9b77acad-9788-471f-b05a-b257997642e1", 00:13:15.470 "assigned_rate_limits": { 00:13:15.470 "rw_ios_per_sec": 0, 00:13:15.470 "rw_mbytes_per_sec": 0, 00:13:15.470 "r_mbytes_per_sec": 0, 00:13:15.470 "w_mbytes_per_sec": 0 00:13:15.470 }, 00:13:15.470 "claimed": true, 00:13:15.470 "claim_type": "exclusive_write", 00:13:15.470 "zoned": false, 00:13:15.470 "supported_io_types": { 00:13:15.470 "read": true, 00:13:15.470 "write": true, 00:13:15.470 "unmap": true, 00:13:15.470 "flush": true, 00:13:15.470 "reset": true, 00:13:15.470 "nvme_admin": false, 00:13:15.470 "nvme_io": false, 00:13:15.470 "nvme_io_md": false, 00:13:15.470 "write_zeroes": true, 00:13:15.470 "zcopy": true, 00:13:15.470 "get_zone_info": false, 00:13:15.470 "zone_management": false, 00:13:15.470 "zone_append": false, 00:13:15.470 "compare": false, 00:13:15.470 "compare_and_write": false, 00:13:15.470 "abort": true, 00:13:15.470 "seek_hole": false, 00:13:15.470 "seek_data": false, 00:13:15.470 "copy": true, 00:13:15.470 "nvme_iov_md": false 00:13:15.470 }, 00:13:15.470 "memory_domains": [ 00:13:15.470 { 00:13:15.470 "dma_device_id": "system", 00:13:15.470 "dma_device_type": 1 00:13:15.470 }, 00:13:15.470 { 00:13:15.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.470 "dma_device_type": 2 00:13:15.470 } 00:13:15.470 ], 00:13:15.470 "driver_specific": {} 00:13:15.470 } 00:13:15.470 ] 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
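The trace above repeats one pattern for each base bdev: create a malloc bdev with `bdev_malloc_create`, wait for it with `waitforbdev`, then pull the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'` and compare `state` and `num_base_bdevs_discovered` against the expected values. A minimal, dependency-free sketch of that check follows — the canned JSON string is a stand-in for the RPC response, and field extraction uses `sed` instead of `jq` purely so the sketch runs anywhere; none of this is the actual test harness code:

```shell
# Canned stand-in for: rpc_cmd bdev_raid_get_bdevs all | jq 'select(...)'
raid_bdev_info='{"name":"Existed_Raid","state":"configuring","raid_level":"raid1","num_base_bdevs":4,"num_base_bdevs_discovered":3}'

# Extract a scalar field from the single-line JSON (the real test uses jq).
get_field() {
    echo "$raid_bdev_info" | sed -n "s/.*\"$1\":\"\{0,1\}\([^,\"}]*\)\"\{0,1\}.*/\1/p"
}

state=$(get_field state)
discovered=$(get_field num_base_bdevs_discovered)
expected_state=configuring

# Same shape as verify_raid_bdev_state: fail loudly on a state mismatch,
# otherwise report progress toward the full num_base_bdevs count.
if [ "$state" != "$expected_state" ]; then
    echo "unexpected state: $state (wanted $expected_state)" >&2
    exit 1
fi
echo "state=$state discovered=$discovered/4"
```

With three of four base bdevs attached, as in the dump above, this prints `state=configuring discovered=3/4`; only after the fourth bdev is claimed does the expected state flip to `online`.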
00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.470 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.470 "name": "Existed_Raid", 00:13:15.470 "uuid": "3217ecf4-4a0a-44fc-a2be-dc35e7e74261", 00:13:15.470 "strip_size_kb": 0, 00:13:15.470 "state": "online", 00:13:15.470 "raid_level": "raid1", 00:13:15.470 "superblock": true, 00:13:15.470 "num_base_bdevs": 4, 
00:13:15.470 "num_base_bdevs_discovered": 4, 00:13:15.470 "num_base_bdevs_operational": 4, 00:13:15.470 "base_bdevs_list": [ 00:13:15.470 { 00:13:15.470 "name": "BaseBdev1", 00:13:15.470 "uuid": "1a9adc04-2fd3-478c-ba1f-9966bff25637", 00:13:15.470 "is_configured": true, 00:13:15.470 "data_offset": 2048, 00:13:15.470 "data_size": 63488 00:13:15.470 }, 00:13:15.470 { 00:13:15.470 "name": "BaseBdev2", 00:13:15.470 "uuid": "654425b2-ba0a-4b5f-93e3-1a0ca1bc87cf", 00:13:15.470 "is_configured": true, 00:13:15.470 "data_offset": 2048, 00:13:15.470 "data_size": 63488 00:13:15.470 }, 00:13:15.470 { 00:13:15.470 "name": "BaseBdev3", 00:13:15.470 "uuid": "056ae784-d633-4c76-8ede-6da2762a78d1", 00:13:15.470 "is_configured": true, 00:13:15.470 "data_offset": 2048, 00:13:15.470 "data_size": 63488 00:13:15.470 }, 00:13:15.470 { 00:13:15.470 "name": "BaseBdev4", 00:13:15.470 "uuid": "9b77acad-9788-471f-b05a-b257997642e1", 00:13:15.470 "is_configured": true, 00:13:15.470 "data_offset": 2048, 00:13:15.470 "data_size": 63488 00:13:15.470 } 00:13:15.470 ] 00:13:15.471 }' 00:13:15.471 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.471 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.041 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:16.041 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:16.041 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:16.041 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:16.041 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:16.041 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:16.041 
11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:16.041 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:16.041 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.041 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.041 [2024-11-20 11:22:58.876837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:16.041 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.041 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:16.041 "name": "Existed_Raid", 00:13:16.041 "aliases": [ 00:13:16.041 "3217ecf4-4a0a-44fc-a2be-dc35e7e74261" 00:13:16.041 ], 00:13:16.041 "product_name": "Raid Volume", 00:13:16.041 "block_size": 512, 00:13:16.041 "num_blocks": 63488, 00:13:16.041 "uuid": "3217ecf4-4a0a-44fc-a2be-dc35e7e74261", 00:13:16.041 "assigned_rate_limits": { 00:13:16.041 "rw_ios_per_sec": 0, 00:13:16.041 "rw_mbytes_per_sec": 0, 00:13:16.041 "r_mbytes_per_sec": 0, 00:13:16.041 "w_mbytes_per_sec": 0 00:13:16.041 }, 00:13:16.041 "claimed": false, 00:13:16.041 "zoned": false, 00:13:16.041 "supported_io_types": { 00:13:16.041 "read": true, 00:13:16.041 "write": true, 00:13:16.041 "unmap": false, 00:13:16.041 "flush": false, 00:13:16.041 "reset": true, 00:13:16.041 "nvme_admin": false, 00:13:16.041 "nvme_io": false, 00:13:16.041 "nvme_io_md": false, 00:13:16.041 "write_zeroes": true, 00:13:16.041 "zcopy": false, 00:13:16.041 "get_zone_info": false, 00:13:16.041 "zone_management": false, 00:13:16.041 "zone_append": false, 00:13:16.041 "compare": false, 00:13:16.041 "compare_and_write": false, 00:13:16.041 "abort": false, 00:13:16.041 "seek_hole": false, 00:13:16.041 "seek_data": false, 00:13:16.041 "copy": false, 00:13:16.041 
"nvme_iov_md": false 00:13:16.041 }, 00:13:16.041 "memory_domains": [ 00:13:16.041 { 00:13:16.041 "dma_device_id": "system", 00:13:16.041 "dma_device_type": 1 00:13:16.041 }, 00:13:16.041 { 00:13:16.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.041 "dma_device_type": 2 00:13:16.041 }, 00:13:16.041 { 00:13:16.041 "dma_device_id": "system", 00:13:16.041 "dma_device_type": 1 00:13:16.041 }, 00:13:16.041 { 00:13:16.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.041 "dma_device_type": 2 00:13:16.041 }, 00:13:16.041 { 00:13:16.041 "dma_device_id": "system", 00:13:16.041 "dma_device_type": 1 00:13:16.041 }, 00:13:16.041 { 00:13:16.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.041 "dma_device_type": 2 00:13:16.041 }, 00:13:16.041 { 00:13:16.041 "dma_device_id": "system", 00:13:16.041 "dma_device_type": 1 00:13:16.041 }, 00:13:16.041 { 00:13:16.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.041 "dma_device_type": 2 00:13:16.041 } 00:13:16.041 ], 00:13:16.041 "driver_specific": { 00:13:16.041 "raid": { 00:13:16.041 "uuid": "3217ecf4-4a0a-44fc-a2be-dc35e7e74261", 00:13:16.041 "strip_size_kb": 0, 00:13:16.041 "state": "online", 00:13:16.041 "raid_level": "raid1", 00:13:16.041 "superblock": true, 00:13:16.041 "num_base_bdevs": 4, 00:13:16.041 "num_base_bdevs_discovered": 4, 00:13:16.041 "num_base_bdevs_operational": 4, 00:13:16.041 "base_bdevs_list": [ 00:13:16.041 { 00:13:16.041 "name": "BaseBdev1", 00:13:16.041 "uuid": "1a9adc04-2fd3-478c-ba1f-9966bff25637", 00:13:16.041 "is_configured": true, 00:13:16.041 "data_offset": 2048, 00:13:16.041 "data_size": 63488 00:13:16.041 }, 00:13:16.041 { 00:13:16.041 "name": "BaseBdev2", 00:13:16.041 "uuid": "654425b2-ba0a-4b5f-93e3-1a0ca1bc87cf", 00:13:16.041 "is_configured": true, 00:13:16.041 "data_offset": 2048, 00:13:16.041 "data_size": 63488 00:13:16.041 }, 00:13:16.041 { 00:13:16.041 "name": "BaseBdev3", 00:13:16.041 "uuid": "056ae784-d633-4c76-8ede-6da2762a78d1", 00:13:16.041 "is_configured": true, 
00:13:16.041 "data_offset": 2048, 00:13:16.041 "data_size": 63488 00:13:16.041 }, 00:13:16.041 { 00:13:16.041 "name": "BaseBdev4", 00:13:16.041 "uuid": "9b77acad-9788-471f-b05a-b257997642e1", 00:13:16.041 "is_configured": true, 00:13:16.041 "data_offset": 2048, 00:13:16.041 "data_size": 63488 00:13:16.041 } 00:13:16.041 ] 00:13:16.041 } 00:13:16.041 } 00:13:16.041 }' 00:13:16.041 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:16.041 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:16.041 BaseBdev2 00:13:16.041 BaseBdev3 00:13:16.041 BaseBdev4' 00:13:16.041 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.041 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:16.041 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.041 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.041 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:16.041 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.041 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.041 11:22:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.041 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.042 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.309 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.309 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.309 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.309 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:16.309 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.309 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.309 [2024-11-20 11:22:59.203970] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:16.309 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.309 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:16.309 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:16.309 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:16.309 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:16.309 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:16.309 11:22:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:16.309 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.310 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.310 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.310 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.310 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.310 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.310 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.310 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.310 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.310 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.310 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.310 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.310 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.310 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.310 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.310 "name": "Existed_Raid", 00:13:16.310 "uuid": "3217ecf4-4a0a-44fc-a2be-dc35e7e74261", 00:13:16.310 "strip_size_kb": 0, 00:13:16.310 
"state": "online", 00:13:16.310 "raid_level": "raid1", 00:13:16.310 "superblock": true, 00:13:16.310 "num_base_bdevs": 4, 00:13:16.310 "num_base_bdevs_discovered": 3, 00:13:16.310 "num_base_bdevs_operational": 3, 00:13:16.310 "base_bdevs_list": [ 00:13:16.310 { 00:13:16.310 "name": null, 00:13:16.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.310 "is_configured": false, 00:13:16.310 "data_offset": 0, 00:13:16.310 "data_size": 63488 00:13:16.310 }, 00:13:16.310 { 00:13:16.310 "name": "BaseBdev2", 00:13:16.310 "uuid": "654425b2-ba0a-4b5f-93e3-1a0ca1bc87cf", 00:13:16.310 "is_configured": true, 00:13:16.310 "data_offset": 2048, 00:13:16.310 "data_size": 63488 00:13:16.310 }, 00:13:16.310 { 00:13:16.311 "name": "BaseBdev3", 00:13:16.311 "uuid": "056ae784-d633-4c76-8ede-6da2762a78d1", 00:13:16.311 "is_configured": true, 00:13:16.311 "data_offset": 2048, 00:13:16.311 "data_size": 63488 00:13:16.311 }, 00:13:16.311 { 00:13:16.311 "name": "BaseBdev4", 00:13:16.311 "uuid": "9b77acad-9788-471f-b05a-b257997642e1", 00:13:16.311 "is_configured": true, 00:13:16.311 "data_offset": 2048, 00:13:16.311 "data_size": 63488 00:13:16.311 } 00:13:16.311 ] 00:13:16.311 }' 00:13:16.311 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.311 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.879 11:22:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.879 [2024-11-20 11:22:59.782633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:13:16.879 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:16.880 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.880 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.880 [2024-11-20 11:22:59.933432] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.139 [2024-11-20 11:23:00.085659] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:17.139 [2024-11-20 11:23:00.085836] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:17.139 [2024-11-20 11:23:00.182767] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:17.139 [2024-11-20 11:23:00.182929] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:17.139 [2024-11-20 11:23:00.182982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.139 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.407 BaseBdev2 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:17.407 [ 00:13:17.407 { 00:13:17.407 "name": "BaseBdev2", 00:13:17.407 "aliases": [ 00:13:17.407 "7853846b-0720-4ac6-9595-a6f774dc63a0" 00:13:17.407 ], 00:13:17.407 "product_name": "Malloc disk", 00:13:17.407 "block_size": 512, 00:13:17.407 "num_blocks": 65536, 00:13:17.407 "uuid": "7853846b-0720-4ac6-9595-a6f774dc63a0", 00:13:17.407 "assigned_rate_limits": { 00:13:17.407 "rw_ios_per_sec": 0, 00:13:17.407 "rw_mbytes_per_sec": 0, 00:13:17.407 "r_mbytes_per_sec": 0, 00:13:17.407 "w_mbytes_per_sec": 0 00:13:17.407 }, 00:13:17.407 "claimed": false, 00:13:17.407 "zoned": false, 00:13:17.407 "supported_io_types": { 00:13:17.407 "read": true, 00:13:17.407 "write": true, 00:13:17.407 "unmap": true, 00:13:17.407 "flush": true, 00:13:17.407 "reset": true, 00:13:17.407 "nvme_admin": false, 00:13:17.407 "nvme_io": false, 00:13:17.407 "nvme_io_md": false, 00:13:17.407 "write_zeroes": true, 00:13:17.407 "zcopy": true, 00:13:17.407 "get_zone_info": false, 00:13:17.407 "zone_management": false, 00:13:17.407 "zone_append": false, 00:13:17.407 "compare": false, 00:13:17.407 "compare_and_write": false, 00:13:17.407 "abort": true, 00:13:17.407 "seek_hole": false, 00:13:17.407 "seek_data": false, 00:13:17.407 "copy": true, 00:13:17.407 "nvme_iov_md": false 00:13:17.407 }, 00:13:17.407 "memory_domains": [ 00:13:17.407 { 00:13:17.407 "dma_device_id": "system", 00:13:17.407 "dma_device_type": 1 00:13:17.407 }, 00:13:17.407 { 00:13:17.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.407 "dma_device_type": 2 00:13:17.407 } 00:13:17.407 ], 00:13:17.407 "driver_specific": {} 00:13:17.407 } 00:13:17.407 ] 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:17.407 11:23:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.407 BaseBdev3 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:17.407 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.407 11:23:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.407 [ 00:13:17.407 { 00:13:17.407 "name": "BaseBdev3", 00:13:17.407 "aliases": [ 00:13:17.407 "3dd928cd-f02e-45b4-9be0-7c9fbc8f59a4" 00:13:17.407 ], 00:13:17.407 "product_name": "Malloc disk", 00:13:17.407 "block_size": 512, 00:13:17.407 "num_blocks": 65536, 00:13:17.407 "uuid": "3dd928cd-f02e-45b4-9be0-7c9fbc8f59a4", 00:13:17.407 "assigned_rate_limits": { 00:13:17.407 "rw_ios_per_sec": 0, 00:13:17.407 "rw_mbytes_per_sec": 0, 00:13:17.407 "r_mbytes_per_sec": 0, 00:13:17.407 "w_mbytes_per_sec": 0 00:13:17.407 }, 00:13:17.407 "claimed": false, 00:13:17.407 "zoned": false, 00:13:17.407 "supported_io_types": { 00:13:17.407 "read": true, 00:13:17.407 "write": true, 00:13:17.407 "unmap": true, 00:13:17.407 "flush": true, 00:13:17.407 "reset": true, 00:13:17.407 "nvme_admin": false, 00:13:17.407 "nvme_io": false, 00:13:17.407 "nvme_io_md": false, 00:13:17.407 "write_zeroes": true, 00:13:17.407 "zcopy": true, 00:13:17.407 "get_zone_info": false, 00:13:17.407 "zone_management": false, 00:13:17.407 "zone_append": false, 00:13:17.407 "compare": false, 00:13:17.407 "compare_and_write": false, 00:13:17.407 "abort": true, 00:13:17.407 "seek_hole": false, 00:13:17.407 "seek_data": false, 00:13:17.407 "copy": true, 00:13:17.407 "nvme_iov_md": false 00:13:17.407 }, 00:13:17.407 "memory_domains": [ 00:13:17.407 { 00:13:17.407 "dma_device_id": "system", 00:13:17.407 "dma_device_type": 1 00:13:17.407 }, 00:13:17.407 { 00:13:17.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.408 "dma_device_type": 2 00:13:17.408 } 00:13:17.408 ], 00:13:17.408 "driver_specific": {} 00:13:17.408 } 00:13:17.408 ] 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.408 BaseBdev4 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.408 [ 00:13:17.408 { 00:13:17.408 "name": "BaseBdev4", 00:13:17.408 "aliases": [ 00:13:17.408 "e56d7772-2b46-43ca-b403-7380416764a1" 00:13:17.408 ], 00:13:17.408 "product_name": "Malloc disk", 00:13:17.408 "block_size": 512, 00:13:17.408 "num_blocks": 65536, 00:13:17.408 "uuid": "e56d7772-2b46-43ca-b403-7380416764a1", 00:13:17.408 "assigned_rate_limits": { 00:13:17.408 "rw_ios_per_sec": 0, 00:13:17.408 "rw_mbytes_per_sec": 0, 00:13:17.408 "r_mbytes_per_sec": 0, 00:13:17.408 "w_mbytes_per_sec": 0 00:13:17.408 }, 00:13:17.408 "claimed": false, 00:13:17.408 "zoned": false, 00:13:17.408 "supported_io_types": { 00:13:17.408 "read": true, 00:13:17.408 "write": true, 00:13:17.408 "unmap": true, 00:13:17.408 "flush": true, 00:13:17.408 "reset": true, 00:13:17.408 "nvme_admin": false, 00:13:17.408 "nvme_io": false, 00:13:17.408 "nvme_io_md": false, 00:13:17.408 "write_zeroes": true, 00:13:17.408 "zcopy": true, 00:13:17.408 "get_zone_info": false, 00:13:17.408 "zone_management": false, 00:13:17.408 "zone_append": false, 00:13:17.408 "compare": false, 00:13:17.408 "compare_and_write": false, 00:13:17.408 "abort": true, 00:13:17.408 "seek_hole": false, 00:13:17.408 "seek_data": false, 00:13:17.408 "copy": true, 00:13:17.408 "nvme_iov_md": false 00:13:17.408 }, 00:13:17.408 "memory_domains": [ 00:13:17.408 { 00:13:17.408 "dma_device_id": "system", 00:13:17.408 "dma_device_type": 1 00:13:17.408 }, 00:13:17.408 { 00:13:17.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.408 "dma_device_type": 2 00:13:17.408 } 00:13:17.408 ], 00:13:17.408 "driver_specific": {} 00:13:17.408 } 00:13:17.408 ] 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.408 [2024-11-20 11:23:00.497840] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:17.408 [2024-11-20 11:23:00.497951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:17.408 [2024-11-20 11:23:00.498016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.408 [2024-11-20 11:23:00.500092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:17.408 [2024-11-20 11:23:00.500187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.408 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.666 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.666 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.666 "name": "Existed_Raid", 00:13:17.666 "uuid": "2387e2c6-ec1d-4c46-a61c-83ae2394b79f", 00:13:17.666 "strip_size_kb": 0, 00:13:17.666 "state": "configuring", 00:13:17.666 "raid_level": "raid1", 00:13:17.666 "superblock": true, 00:13:17.666 "num_base_bdevs": 4, 00:13:17.667 "num_base_bdevs_discovered": 3, 00:13:17.667 "num_base_bdevs_operational": 4, 00:13:17.667 "base_bdevs_list": [ 00:13:17.667 { 00:13:17.667 "name": "BaseBdev1", 00:13:17.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.667 "is_configured": false, 00:13:17.667 "data_offset": 0, 00:13:17.667 "data_size": 0 00:13:17.667 }, 00:13:17.667 { 00:13:17.667 "name": "BaseBdev2", 00:13:17.667 "uuid": "7853846b-0720-4ac6-9595-a6f774dc63a0", 
00:13:17.667 "is_configured": true, 00:13:17.667 "data_offset": 2048, 00:13:17.667 "data_size": 63488 00:13:17.667 }, 00:13:17.667 { 00:13:17.667 "name": "BaseBdev3", 00:13:17.667 "uuid": "3dd928cd-f02e-45b4-9be0-7c9fbc8f59a4", 00:13:17.667 "is_configured": true, 00:13:17.667 "data_offset": 2048, 00:13:17.667 "data_size": 63488 00:13:17.667 }, 00:13:17.667 { 00:13:17.667 "name": "BaseBdev4", 00:13:17.667 "uuid": "e56d7772-2b46-43ca-b403-7380416764a1", 00:13:17.667 "is_configured": true, 00:13:17.667 "data_offset": 2048, 00:13:17.667 "data_size": 63488 00:13:17.667 } 00:13:17.667 ] 00:13:17.667 }' 00:13:17.667 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.667 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.926 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:17.926 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.926 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.926 [2024-11-20 11:23:00.897207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:17.926 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.926 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:17.926 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.926 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.926 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.926 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:17.926 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.926 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.926 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.926 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.926 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.926 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.926 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.926 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.926 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.926 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.926 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.926 "name": "Existed_Raid", 00:13:17.926 "uuid": "2387e2c6-ec1d-4c46-a61c-83ae2394b79f", 00:13:17.926 "strip_size_kb": 0, 00:13:17.926 "state": "configuring", 00:13:17.926 "raid_level": "raid1", 00:13:17.926 "superblock": true, 00:13:17.926 "num_base_bdevs": 4, 00:13:17.926 "num_base_bdevs_discovered": 2, 00:13:17.926 "num_base_bdevs_operational": 4, 00:13:17.926 "base_bdevs_list": [ 00:13:17.926 { 00:13:17.926 "name": "BaseBdev1", 00:13:17.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.926 "is_configured": false, 00:13:17.927 "data_offset": 0, 00:13:17.927 "data_size": 0 00:13:17.927 }, 00:13:17.927 { 00:13:17.927 "name": null, 00:13:17.927 "uuid": "7853846b-0720-4ac6-9595-a6f774dc63a0", 00:13:17.927 
"is_configured": false, 00:13:17.927 "data_offset": 0, 00:13:17.927 "data_size": 63488 00:13:17.927 }, 00:13:17.927 { 00:13:17.927 "name": "BaseBdev3", 00:13:17.927 "uuid": "3dd928cd-f02e-45b4-9be0-7c9fbc8f59a4", 00:13:17.927 "is_configured": true, 00:13:17.927 "data_offset": 2048, 00:13:17.927 "data_size": 63488 00:13:17.927 }, 00:13:17.927 { 00:13:17.927 "name": "BaseBdev4", 00:13:17.927 "uuid": "e56d7772-2b46-43ca-b403-7380416764a1", 00:13:17.927 "is_configured": true, 00:13:17.927 "data_offset": 2048, 00:13:17.927 "data_size": 63488 00:13:17.927 } 00:13:17.927 ] 00:13:17.927 }' 00:13:17.927 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.927 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.494 [2024-11-20 11:23:01.457099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:18.494 BaseBdev1 
00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.494 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.494 [ 00:13:18.494 { 00:13:18.494 "name": "BaseBdev1", 00:13:18.494 "aliases": [ 00:13:18.494 "70bf0fdb-c70e-44b3-bdea-f26f71bdd8d3" 00:13:18.494 ], 00:13:18.494 "product_name": "Malloc disk", 00:13:18.494 "block_size": 512, 00:13:18.494 "num_blocks": 65536, 00:13:18.494 "uuid": "70bf0fdb-c70e-44b3-bdea-f26f71bdd8d3", 00:13:18.494 "assigned_rate_limits": { 00:13:18.494 
"rw_ios_per_sec": 0, 00:13:18.494 "rw_mbytes_per_sec": 0, 00:13:18.494 "r_mbytes_per_sec": 0, 00:13:18.494 "w_mbytes_per_sec": 0 00:13:18.494 }, 00:13:18.494 "claimed": true, 00:13:18.494 "claim_type": "exclusive_write", 00:13:18.494 "zoned": false, 00:13:18.494 "supported_io_types": { 00:13:18.494 "read": true, 00:13:18.494 "write": true, 00:13:18.494 "unmap": true, 00:13:18.494 "flush": true, 00:13:18.494 "reset": true, 00:13:18.494 "nvme_admin": false, 00:13:18.494 "nvme_io": false, 00:13:18.494 "nvme_io_md": false, 00:13:18.494 "write_zeroes": true, 00:13:18.494 "zcopy": true, 00:13:18.494 "get_zone_info": false, 00:13:18.494 "zone_management": false, 00:13:18.494 "zone_append": false, 00:13:18.494 "compare": false, 00:13:18.494 "compare_and_write": false, 00:13:18.494 "abort": true, 00:13:18.494 "seek_hole": false, 00:13:18.494 "seek_data": false, 00:13:18.494 "copy": true, 00:13:18.494 "nvme_iov_md": false 00:13:18.495 }, 00:13:18.495 "memory_domains": [ 00:13:18.495 { 00:13:18.495 "dma_device_id": "system", 00:13:18.495 "dma_device_type": 1 00:13:18.495 }, 00:13:18.495 { 00:13:18.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.495 "dma_device_type": 2 00:13:18.495 } 00:13:18.495 ], 00:13:18.495 "driver_specific": {} 00:13:18.495 } 00:13:18.495 ] 00:13:18.495 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.495 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:18.495 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:18.495 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.495 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.495 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:18.495 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.495 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.495 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.495 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.495 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.495 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.495 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.495 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.495 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.495 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.495 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.495 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.495 "name": "Existed_Raid", 00:13:18.495 "uuid": "2387e2c6-ec1d-4c46-a61c-83ae2394b79f", 00:13:18.495 "strip_size_kb": 0, 00:13:18.495 "state": "configuring", 00:13:18.495 "raid_level": "raid1", 00:13:18.495 "superblock": true, 00:13:18.495 "num_base_bdevs": 4, 00:13:18.495 "num_base_bdevs_discovered": 3, 00:13:18.495 "num_base_bdevs_operational": 4, 00:13:18.495 "base_bdevs_list": [ 00:13:18.495 { 00:13:18.495 "name": "BaseBdev1", 00:13:18.495 "uuid": "70bf0fdb-c70e-44b3-bdea-f26f71bdd8d3", 00:13:18.495 "is_configured": true, 00:13:18.495 "data_offset": 2048, 00:13:18.495 "data_size": 63488 
00:13:18.495 }, 00:13:18.495 { 00:13:18.495 "name": null, 00:13:18.495 "uuid": "7853846b-0720-4ac6-9595-a6f774dc63a0", 00:13:18.495 "is_configured": false, 00:13:18.495 "data_offset": 0, 00:13:18.495 "data_size": 63488 00:13:18.495 }, 00:13:18.495 { 00:13:18.495 "name": "BaseBdev3", 00:13:18.495 "uuid": "3dd928cd-f02e-45b4-9be0-7c9fbc8f59a4", 00:13:18.495 "is_configured": true, 00:13:18.495 "data_offset": 2048, 00:13:18.495 "data_size": 63488 00:13:18.495 }, 00:13:18.495 { 00:13:18.495 "name": "BaseBdev4", 00:13:18.495 "uuid": "e56d7772-2b46-43ca-b403-7380416764a1", 00:13:18.495 "is_configured": true, 00:13:18.495 "data_offset": 2048, 00:13:18.495 "data_size": 63488 00:13:18.495 } 00:13:18.495 ] 00:13:18.495 }' 00:13:18.495 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.495 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.073 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.073 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:19.073 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.073 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.073 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.073 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:19.073 11:23:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:19.073 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.073 11:23:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.073 
[2024-11-20 11:23:02.000323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:19.073 11:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.073 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:19.073 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.073 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.073 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.073 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.073 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.073 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.073 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.073 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.073 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.073 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.073 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.073 11:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.073 11:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.073 11:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.073 11:23:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.073 "name": "Existed_Raid", 00:13:19.073 "uuid": "2387e2c6-ec1d-4c46-a61c-83ae2394b79f", 00:13:19.073 "strip_size_kb": 0, 00:13:19.073 "state": "configuring", 00:13:19.073 "raid_level": "raid1", 00:13:19.073 "superblock": true, 00:13:19.073 "num_base_bdevs": 4, 00:13:19.073 "num_base_bdevs_discovered": 2, 00:13:19.073 "num_base_bdevs_operational": 4, 00:13:19.073 "base_bdevs_list": [ 00:13:19.073 { 00:13:19.073 "name": "BaseBdev1", 00:13:19.073 "uuid": "70bf0fdb-c70e-44b3-bdea-f26f71bdd8d3", 00:13:19.073 "is_configured": true, 00:13:19.073 "data_offset": 2048, 00:13:19.073 "data_size": 63488 00:13:19.073 }, 00:13:19.073 { 00:13:19.073 "name": null, 00:13:19.073 "uuid": "7853846b-0720-4ac6-9595-a6f774dc63a0", 00:13:19.073 "is_configured": false, 00:13:19.073 "data_offset": 0, 00:13:19.073 "data_size": 63488 00:13:19.073 }, 00:13:19.073 { 00:13:19.073 "name": null, 00:13:19.073 "uuid": "3dd928cd-f02e-45b4-9be0-7c9fbc8f59a4", 00:13:19.073 "is_configured": false, 00:13:19.073 "data_offset": 0, 00:13:19.073 "data_size": 63488 00:13:19.073 }, 00:13:19.073 { 00:13:19.073 "name": "BaseBdev4", 00:13:19.073 "uuid": "e56d7772-2b46-43ca-b403-7380416764a1", 00:13:19.073 "is_configured": true, 00:13:19.073 "data_offset": 2048, 00:13:19.073 "data_size": 63488 00:13:19.073 } 00:13:19.073 ] 00:13:19.073 }' 00:13:19.073 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.073 11:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.332 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:19.332 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.332 11:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.332 
11:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.591 [2024-11-20 11:23:02.471557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.591 "name": "Existed_Raid", 00:13:19.591 "uuid": "2387e2c6-ec1d-4c46-a61c-83ae2394b79f", 00:13:19.591 "strip_size_kb": 0, 00:13:19.591 "state": "configuring", 00:13:19.591 "raid_level": "raid1", 00:13:19.591 "superblock": true, 00:13:19.591 "num_base_bdevs": 4, 00:13:19.591 "num_base_bdevs_discovered": 3, 00:13:19.591 "num_base_bdevs_operational": 4, 00:13:19.591 "base_bdevs_list": [ 00:13:19.591 { 00:13:19.591 "name": "BaseBdev1", 00:13:19.591 "uuid": "70bf0fdb-c70e-44b3-bdea-f26f71bdd8d3", 00:13:19.591 "is_configured": true, 00:13:19.591 "data_offset": 2048, 00:13:19.591 "data_size": 63488 00:13:19.591 }, 00:13:19.591 { 00:13:19.591 "name": null, 00:13:19.591 "uuid": "7853846b-0720-4ac6-9595-a6f774dc63a0", 00:13:19.591 "is_configured": false, 00:13:19.591 "data_offset": 0, 00:13:19.591 "data_size": 63488 00:13:19.591 }, 00:13:19.591 { 00:13:19.591 "name": "BaseBdev3", 00:13:19.591 "uuid": "3dd928cd-f02e-45b4-9be0-7c9fbc8f59a4", 00:13:19.591 "is_configured": true, 00:13:19.591 "data_offset": 2048, 00:13:19.591 "data_size": 63488 00:13:19.591 }, 00:13:19.591 { 00:13:19.591 "name": "BaseBdev4", 00:13:19.591 "uuid": 
"e56d7772-2b46-43ca-b403-7380416764a1", 00:13:19.591 "is_configured": true, 00:13:19.591 "data_offset": 2048, 00:13:19.591 "data_size": 63488 00:13:19.591 } 00:13:19.591 ] 00:13:19.591 }' 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.591 11:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.850 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.850 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:19.850 11:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.850 11:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.850 11:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.109 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:20.109 11:23:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:20.109 11:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.109 11:23:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.109 [2024-11-20 11:23:02.982714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:20.109 11:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.109 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:20.109 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.109 11:23:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.109 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.109 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.109 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:20.109 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.109 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.109 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.109 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.109 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.109 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.109 11:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.109 11:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.109 11:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.110 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.110 "name": "Existed_Raid", 00:13:20.110 "uuid": "2387e2c6-ec1d-4c46-a61c-83ae2394b79f", 00:13:20.110 "strip_size_kb": 0, 00:13:20.110 "state": "configuring", 00:13:20.110 "raid_level": "raid1", 00:13:20.110 "superblock": true, 00:13:20.110 "num_base_bdevs": 4, 00:13:20.110 "num_base_bdevs_discovered": 2, 00:13:20.110 "num_base_bdevs_operational": 4, 00:13:20.110 "base_bdevs_list": [ 00:13:20.110 { 00:13:20.110 "name": null, 00:13:20.110 
"uuid": "70bf0fdb-c70e-44b3-bdea-f26f71bdd8d3", 00:13:20.110 "is_configured": false, 00:13:20.110 "data_offset": 0, 00:13:20.110 "data_size": 63488 00:13:20.110 }, 00:13:20.110 { 00:13:20.110 "name": null, 00:13:20.110 "uuid": "7853846b-0720-4ac6-9595-a6f774dc63a0", 00:13:20.110 "is_configured": false, 00:13:20.110 "data_offset": 0, 00:13:20.110 "data_size": 63488 00:13:20.110 }, 00:13:20.110 { 00:13:20.110 "name": "BaseBdev3", 00:13:20.110 "uuid": "3dd928cd-f02e-45b4-9be0-7c9fbc8f59a4", 00:13:20.110 "is_configured": true, 00:13:20.110 "data_offset": 2048, 00:13:20.110 "data_size": 63488 00:13:20.110 }, 00:13:20.110 { 00:13:20.110 "name": "BaseBdev4", 00:13:20.110 "uuid": "e56d7772-2b46-43ca-b403-7380416764a1", 00:13:20.110 "is_configured": true, 00:13:20.110 "data_offset": 2048, 00:13:20.110 "data_size": 63488 00:13:20.110 } 00:13:20.110 ] 00:13:20.110 }' 00:13:20.110 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.110 11:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.679 [2024-11-20 11:23:03.623989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.679 11:23:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.679 "name": "Existed_Raid", 00:13:20.679 "uuid": "2387e2c6-ec1d-4c46-a61c-83ae2394b79f", 00:13:20.679 "strip_size_kb": 0, 00:13:20.679 "state": "configuring", 00:13:20.679 "raid_level": "raid1", 00:13:20.679 "superblock": true, 00:13:20.679 "num_base_bdevs": 4, 00:13:20.679 "num_base_bdevs_discovered": 3, 00:13:20.679 "num_base_bdevs_operational": 4, 00:13:20.679 "base_bdevs_list": [ 00:13:20.679 { 00:13:20.679 "name": null, 00:13:20.679 "uuid": "70bf0fdb-c70e-44b3-bdea-f26f71bdd8d3", 00:13:20.679 "is_configured": false, 00:13:20.679 "data_offset": 0, 00:13:20.679 "data_size": 63488 00:13:20.679 }, 00:13:20.679 { 00:13:20.679 "name": "BaseBdev2", 00:13:20.679 "uuid": "7853846b-0720-4ac6-9595-a6f774dc63a0", 00:13:20.679 "is_configured": true, 00:13:20.679 "data_offset": 2048, 00:13:20.679 "data_size": 63488 00:13:20.679 }, 00:13:20.679 { 00:13:20.679 "name": "BaseBdev3", 00:13:20.679 "uuid": "3dd928cd-f02e-45b4-9be0-7c9fbc8f59a4", 00:13:20.679 "is_configured": true, 00:13:20.679 "data_offset": 2048, 00:13:20.679 "data_size": 63488 00:13:20.679 }, 00:13:20.679 { 00:13:20.679 "name": "BaseBdev4", 00:13:20.679 "uuid": "e56d7772-2b46-43ca-b403-7380416764a1", 00:13:20.679 "is_configured": true, 00:13:20.679 "data_offset": 2048, 00:13:20.679 "data_size": 63488 00:13:20.679 } 00:13:20.679 ] 00:13:20.679 }' 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.679 11:23:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.249 11:23:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 70bf0fdb-c70e-44b3-bdea-f26f71bdd8d3 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.249 [2024-11-20 11:23:04.246156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:21.249 NewBaseBdev 00:13:21.249 [2024-11-20 11:23:04.246498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:21.249 [2024-11-20 11:23:04.246521] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:21.249 [2024-11-20 11:23:04.246790] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:21.249 [2024-11-20 11:23:04.246953] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:21.249 [2024-11-20 11:23:04.246963] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:21.249 [2024-11-20 11:23:04.247105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.249 
11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.249 [ 00:13:21.249 { 00:13:21.249 "name": "NewBaseBdev", 00:13:21.249 "aliases": [ 00:13:21.249 "70bf0fdb-c70e-44b3-bdea-f26f71bdd8d3" 00:13:21.249 ], 00:13:21.249 "product_name": "Malloc disk", 00:13:21.249 "block_size": 512, 00:13:21.249 "num_blocks": 65536, 00:13:21.249 "uuid": "70bf0fdb-c70e-44b3-bdea-f26f71bdd8d3", 00:13:21.249 "assigned_rate_limits": { 00:13:21.249 "rw_ios_per_sec": 0, 00:13:21.249 "rw_mbytes_per_sec": 0, 00:13:21.249 "r_mbytes_per_sec": 0, 00:13:21.249 "w_mbytes_per_sec": 0 00:13:21.249 }, 00:13:21.249 "claimed": true, 00:13:21.249 "claim_type": "exclusive_write", 00:13:21.249 "zoned": false, 00:13:21.249 "supported_io_types": { 00:13:21.249 "read": true, 00:13:21.249 "write": true, 00:13:21.249 "unmap": true, 00:13:21.249 "flush": true, 00:13:21.249 "reset": true, 00:13:21.249 "nvme_admin": false, 00:13:21.249 "nvme_io": false, 00:13:21.249 "nvme_io_md": false, 00:13:21.249 "write_zeroes": true, 00:13:21.249 "zcopy": true, 00:13:21.249 "get_zone_info": false, 00:13:21.249 "zone_management": false, 00:13:21.249 "zone_append": false, 00:13:21.249 "compare": false, 00:13:21.249 "compare_and_write": false, 00:13:21.249 "abort": true, 00:13:21.249 "seek_hole": false, 00:13:21.249 "seek_data": false, 00:13:21.249 "copy": true, 00:13:21.249 "nvme_iov_md": false 00:13:21.249 }, 00:13:21.249 "memory_domains": [ 00:13:21.249 { 00:13:21.249 "dma_device_id": "system", 00:13:21.249 "dma_device_type": 1 00:13:21.249 }, 00:13:21.249 { 00:13:21.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.249 "dma_device_type": 2 00:13:21.249 } 00:13:21.249 ], 00:13:21.249 "driver_specific": {} 00:13:21.249 } 00:13:21.249 ] 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:21.249 11:23:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.249 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.249 "name": "Existed_Raid", 00:13:21.249 "uuid": "2387e2c6-ec1d-4c46-a61c-83ae2394b79f", 00:13:21.249 "strip_size_kb": 0, 00:13:21.249 
"state": "online", 00:13:21.249 "raid_level": "raid1", 00:13:21.249 "superblock": true, 00:13:21.249 "num_base_bdevs": 4, 00:13:21.249 "num_base_bdevs_discovered": 4, 00:13:21.249 "num_base_bdevs_operational": 4, 00:13:21.249 "base_bdevs_list": [ 00:13:21.249 { 00:13:21.249 "name": "NewBaseBdev", 00:13:21.249 "uuid": "70bf0fdb-c70e-44b3-bdea-f26f71bdd8d3", 00:13:21.249 "is_configured": true, 00:13:21.249 "data_offset": 2048, 00:13:21.249 "data_size": 63488 00:13:21.249 }, 00:13:21.249 { 00:13:21.249 "name": "BaseBdev2", 00:13:21.249 "uuid": "7853846b-0720-4ac6-9595-a6f774dc63a0", 00:13:21.249 "is_configured": true, 00:13:21.249 "data_offset": 2048, 00:13:21.249 "data_size": 63488 00:13:21.249 }, 00:13:21.249 { 00:13:21.249 "name": "BaseBdev3", 00:13:21.249 "uuid": "3dd928cd-f02e-45b4-9be0-7c9fbc8f59a4", 00:13:21.250 "is_configured": true, 00:13:21.250 "data_offset": 2048, 00:13:21.250 "data_size": 63488 00:13:21.250 }, 00:13:21.250 { 00:13:21.250 "name": "BaseBdev4", 00:13:21.250 "uuid": "e56d7772-2b46-43ca-b403-7380416764a1", 00:13:21.250 "is_configured": true, 00:13:21.250 "data_offset": 2048, 00:13:21.250 "data_size": 63488 00:13:21.250 } 00:13:21.250 ] 00:13:21.250 }' 00:13:21.250 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.250 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.819 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:21.819 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:21.819 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:21.819 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:21.819 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:21.819 
11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:21.819 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:21.819 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:21.819 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.819 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.819 [2024-11-20 11:23:04.789732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:21.819 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.819 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:21.819 "name": "Existed_Raid", 00:13:21.819 "aliases": [ 00:13:21.819 "2387e2c6-ec1d-4c46-a61c-83ae2394b79f" 00:13:21.819 ], 00:13:21.819 "product_name": "Raid Volume", 00:13:21.819 "block_size": 512, 00:13:21.819 "num_blocks": 63488, 00:13:21.819 "uuid": "2387e2c6-ec1d-4c46-a61c-83ae2394b79f", 00:13:21.819 "assigned_rate_limits": { 00:13:21.819 "rw_ios_per_sec": 0, 00:13:21.819 "rw_mbytes_per_sec": 0, 00:13:21.819 "r_mbytes_per_sec": 0, 00:13:21.819 "w_mbytes_per_sec": 0 00:13:21.819 }, 00:13:21.819 "claimed": false, 00:13:21.819 "zoned": false, 00:13:21.819 "supported_io_types": { 00:13:21.819 "read": true, 00:13:21.819 "write": true, 00:13:21.819 "unmap": false, 00:13:21.819 "flush": false, 00:13:21.819 "reset": true, 00:13:21.819 "nvme_admin": false, 00:13:21.819 "nvme_io": false, 00:13:21.819 "nvme_io_md": false, 00:13:21.819 "write_zeroes": true, 00:13:21.819 "zcopy": false, 00:13:21.819 "get_zone_info": false, 00:13:21.819 "zone_management": false, 00:13:21.819 "zone_append": false, 00:13:21.819 "compare": false, 00:13:21.819 "compare_and_write": false, 00:13:21.819 
"abort": false, 00:13:21.819 "seek_hole": false, 00:13:21.819 "seek_data": false, 00:13:21.819 "copy": false, 00:13:21.819 "nvme_iov_md": false 00:13:21.819 }, 00:13:21.819 "memory_domains": [ 00:13:21.819 { 00:13:21.819 "dma_device_id": "system", 00:13:21.819 "dma_device_type": 1 00:13:21.819 }, 00:13:21.819 { 00:13:21.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.819 "dma_device_type": 2 00:13:21.819 }, 00:13:21.819 { 00:13:21.819 "dma_device_id": "system", 00:13:21.819 "dma_device_type": 1 00:13:21.819 }, 00:13:21.819 { 00:13:21.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.819 "dma_device_type": 2 00:13:21.819 }, 00:13:21.819 { 00:13:21.819 "dma_device_id": "system", 00:13:21.819 "dma_device_type": 1 00:13:21.819 }, 00:13:21.819 { 00:13:21.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.819 "dma_device_type": 2 00:13:21.819 }, 00:13:21.819 { 00:13:21.819 "dma_device_id": "system", 00:13:21.819 "dma_device_type": 1 00:13:21.819 }, 00:13:21.819 { 00:13:21.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.819 "dma_device_type": 2 00:13:21.819 } 00:13:21.819 ], 00:13:21.819 "driver_specific": { 00:13:21.819 "raid": { 00:13:21.819 "uuid": "2387e2c6-ec1d-4c46-a61c-83ae2394b79f", 00:13:21.819 "strip_size_kb": 0, 00:13:21.819 "state": "online", 00:13:21.819 "raid_level": "raid1", 00:13:21.819 "superblock": true, 00:13:21.819 "num_base_bdevs": 4, 00:13:21.819 "num_base_bdevs_discovered": 4, 00:13:21.819 "num_base_bdevs_operational": 4, 00:13:21.819 "base_bdevs_list": [ 00:13:21.819 { 00:13:21.819 "name": "NewBaseBdev", 00:13:21.819 "uuid": "70bf0fdb-c70e-44b3-bdea-f26f71bdd8d3", 00:13:21.819 "is_configured": true, 00:13:21.819 "data_offset": 2048, 00:13:21.819 "data_size": 63488 00:13:21.819 }, 00:13:21.819 { 00:13:21.819 "name": "BaseBdev2", 00:13:21.819 "uuid": "7853846b-0720-4ac6-9595-a6f774dc63a0", 00:13:21.819 "is_configured": true, 00:13:21.819 "data_offset": 2048, 00:13:21.819 "data_size": 63488 00:13:21.819 }, 00:13:21.819 { 
00:13:21.819 "name": "BaseBdev3", 00:13:21.819 "uuid": "3dd928cd-f02e-45b4-9be0-7c9fbc8f59a4", 00:13:21.819 "is_configured": true, 00:13:21.819 "data_offset": 2048, 00:13:21.819 "data_size": 63488 00:13:21.819 }, 00:13:21.819 { 00:13:21.819 "name": "BaseBdev4", 00:13:21.819 "uuid": "e56d7772-2b46-43ca-b403-7380416764a1", 00:13:21.819 "is_configured": true, 00:13:21.819 "data_offset": 2048, 00:13:21.819 "data_size": 63488 00:13:21.819 } 00:13:21.819 ] 00:13:21.819 } 00:13:21.819 } 00:13:21.819 }' 00:13:21.819 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:21.819 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:21.819 BaseBdev2 00:13:21.819 BaseBdev3 00:13:21.819 BaseBdev4' 00:13:21.819 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.080 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:22.080 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.080 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:22.080 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.080 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.080 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.080 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.080 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:13:22.080 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.080 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.080 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:22.080 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.080 11:23:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.080 11:23:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.080 [2024-11-20 11:23:05.128722] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:22.080 [2024-11-20 11:23:05.128806] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.080 [2024-11-20 11:23:05.128921] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.080 [2024-11-20 11:23:05.129280] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.080 [2024-11-20 11:23:05.129346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74000 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74000 ']' 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74000 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74000 00:13:22.080 killing process with pid 74000 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74000' 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74000 00:13:22.080 [2024-11-20 11:23:05.174387] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:22.080 11:23:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74000 00:13:22.648 [2024-11-20 11:23:05.590825] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:24.028 11:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:24.028 ************************************ 00:13:24.028 END TEST raid_state_function_test_sb 00:13:24.028 ************************************ 00:13:24.028 00:13:24.028 real 0m11.968s 
00:13:24.028 user 0m19.033s 00:13:24.028 sys 0m2.142s 00:13:24.028 11:23:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.028 11:23:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.028 11:23:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:13:24.028 11:23:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:24.029 11:23:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.029 11:23:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:24.029 ************************************ 00:13:24.029 START TEST raid_superblock_test 00:13:24.029 ************************************ 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:24.029 11:23:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74676 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74676 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74676 ']' 00:13:24.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.029 11:23:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.029 [2024-11-20 11:23:06.894794] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:13:24.029 [2024-11-20 11:23:06.894995] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74676 ] 00:13:24.029 [2024-11-20 11:23:07.066533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.287 [2024-11-20 11:23:07.176746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.287 [2024-11-20 11:23:07.374834] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.287 [2024-11-20 11:23:07.374945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:24.856 
11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.856 malloc1 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.856 [2024-11-20 11:23:07.848041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:24.856 [2024-11-20 11:23:07.848188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.856 [2024-11-20 11:23:07.848232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:24.856 [2024-11-20 11:23:07.848267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.856 [2024-11-20 11:23:07.850405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.856 [2024-11-20 11:23:07.850487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:24.856 pt1 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.856 malloc2 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.856 [2024-11-20 11:23:07.905619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:24.856 [2024-11-20 11:23:07.905718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.856 [2024-11-20 11:23:07.905757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:24.856 [2024-11-20 11:23:07.905783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.856 [2024-11-20 11:23:07.907866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.856 [2024-11-20 11:23:07.907951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:24.856 
pt2 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.856 malloc3 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.856 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:25.116 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.116 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.116 [2024-11-20 11:23:07.975496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:25.116 [2024-11-20 11:23:07.975643] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.116 [2024-11-20 11:23:07.975685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:25.116 [2024-11-20 11:23:07.975714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.116 [2024-11-20 11:23:07.977965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.116 [2024-11-20 11:23:07.978047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:25.116 pt3 00:13:25.116 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.116 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:25.116 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:25.116 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:25.116 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:25.116 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:25.116 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:25.116 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:25.116 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:25.116 11:23:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:25.116 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.116 11:23:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.116 malloc4 00:13:25.116 11:23:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.116 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:25.116 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.116 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.116 [2024-11-20 11:23:08.034746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:25.116 [2024-11-20 11:23:08.034879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.116 [2024-11-20 11:23:08.034918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:25.116 [2024-11-20 11:23:08.034945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.116 [2024-11-20 11:23:08.037166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.116 [2024-11-20 11:23:08.037247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:25.116 pt4 00:13:25.116 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.116 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:25.116 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:25.116 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:25.116 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.116 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.116 [2024-11-20 11:23:08.046788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:25.116 [2024-11-20 11:23:08.048751] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:25.116 [2024-11-20 11:23:08.048865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:25.116 [2024-11-20 11:23:08.048928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:25.116 [2024-11-20 11:23:08.049175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:25.116 [2024-11-20 11:23:08.049227] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:25.116 [2024-11-20 11:23:08.049570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:25.116 [2024-11-20 11:23:08.049800] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:25.116 [2024-11-20 11:23:08.049851] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:25.116 [2024-11-20 11:23:08.050078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.116 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.116 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:25.116 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.116 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.117 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.117 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.117 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:25.117 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.117 
11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.117 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.117 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.117 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.117 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.117 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.117 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.117 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.117 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.117 "name": "raid_bdev1", 00:13:25.117 "uuid": "b2ede113-656b-4dc6-8643-3d3bef99a212", 00:13:25.117 "strip_size_kb": 0, 00:13:25.117 "state": "online", 00:13:25.117 "raid_level": "raid1", 00:13:25.117 "superblock": true, 00:13:25.117 "num_base_bdevs": 4, 00:13:25.117 "num_base_bdevs_discovered": 4, 00:13:25.117 "num_base_bdevs_operational": 4, 00:13:25.117 "base_bdevs_list": [ 00:13:25.117 { 00:13:25.117 "name": "pt1", 00:13:25.117 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:25.117 "is_configured": true, 00:13:25.117 "data_offset": 2048, 00:13:25.117 "data_size": 63488 00:13:25.117 }, 00:13:25.117 { 00:13:25.117 "name": "pt2", 00:13:25.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:25.117 "is_configured": true, 00:13:25.117 "data_offset": 2048, 00:13:25.117 "data_size": 63488 00:13:25.117 }, 00:13:25.117 { 00:13:25.117 "name": "pt3", 00:13:25.117 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:25.117 "is_configured": true, 00:13:25.117 "data_offset": 2048, 00:13:25.117 "data_size": 63488 
00:13:25.117 }, 00:13:25.117 { 00:13:25.117 "name": "pt4", 00:13:25.117 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:25.117 "is_configured": true, 00:13:25.117 "data_offset": 2048, 00:13:25.117 "data_size": 63488 00:13:25.117 } 00:13:25.117 ] 00:13:25.117 }' 00:13:25.117 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.117 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.376 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:25.376 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:25.376 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:25.376 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:25.376 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:25.376 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:25.376 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:25.376 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:25.376 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.376 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.376 [2024-11-20 11:23:08.466396] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:25.636 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.636 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:25.636 "name": "raid_bdev1", 00:13:25.636 "aliases": [ 00:13:25.636 "b2ede113-656b-4dc6-8643-3d3bef99a212" 00:13:25.636 ], 
00:13:25.636 "product_name": "Raid Volume", 00:13:25.636 "block_size": 512, 00:13:25.636 "num_blocks": 63488, 00:13:25.636 "uuid": "b2ede113-656b-4dc6-8643-3d3bef99a212", 00:13:25.636 "assigned_rate_limits": { 00:13:25.636 "rw_ios_per_sec": 0, 00:13:25.636 "rw_mbytes_per_sec": 0, 00:13:25.636 "r_mbytes_per_sec": 0, 00:13:25.636 "w_mbytes_per_sec": 0 00:13:25.636 }, 00:13:25.636 "claimed": false, 00:13:25.636 "zoned": false, 00:13:25.636 "supported_io_types": { 00:13:25.636 "read": true, 00:13:25.636 "write": true, 00:13:25.636 "unmap": false, 00:13:25.636 "flush": false, 00:13:25.636 "reset": true, 00:13:25.636 "nvme_admin": false, 00:13:25.636 "nvme_io": false, 00:13:25.636 "nvme_io_md": false, 00:13:25.636 "write_zeroes": true, 00:13:25.636 "zcopy": false, 00:13:25.636 "get_zone_info": false, 00:13:25.636 "zone_management": false, 00:13:25.636 "zone_append": false, 00:13:25.636 "compare": false, 00:13:25.636 "compare_and_write": false, 00:13:25.636 "abort": false, 00:13:25.636 "seek_hole": false, 00:13:25.636 "seek_data": false, 00:13:25.636 "copy": false, 00:13:25.636 "nvme_iov_md": false 00:13:25.636 }, 00:13:25.636 "memory_domains": [ 00:13:25.636 { 00:13:25.636 "dma_device_id": "system", 00:13:25.636 "dma_device_type": 1 00:13:25.636 }, 00:13:25.636 { 00:13:25.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.636 "dma_device_type": 2 00:13:25.636 }, 00:13:25.636 { 00:13:25.636 "dma_device_id": "system", 00:13:25.636 "dma_device_type": 1 00:13:25.636 }, 00:13:25.636 { 00:13:25.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.636 "dma_device_type": 2 00:13:25.636 }, 00:13:25.636 { 00:13:25.636 "dma_device_id": "system", 00:13:25.636 "dma_device_type": 1 00:13:25.636 }, 00:13:25.636 { 00:13:25.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.636 "dma_device_type": 2 00:13:25.636 }, 00:13:25.636 { 00:13:25.636 "dma_device_id": "system", 00:13:25.636 "dma_device_type": 1 00:13:25.636 }, 00:13:25.636 { 00:13:25.636 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:25.636 "dma_device_type": 2 00:13:25.636 } 00:13:25.636 ], 00:13:25.636 "driver_specific": { 00:13:25.636 "raid": { 00:13:25.636 "uuid": "b2ede113-656b-4dc6-8643-3d3bef99a212", 00:13:25.636 "strip_size_kb": 0, 00:13:25.636 "state": "online", 00:13:25.636 "raid_level": "raid1", 00:13:25.636 "superblock": true, 00:13:25.636 "num_base_bdevs": 4, 00:13:25.636 "num_base_bdevs_discovered": 4, 00:13:25.636 "num_base_bdevs_operational": 4, 00:13:25.636 "base_bdevs_list": [ 00:13:25.636 { 00:13:25.636 "name": "pt1", 00:13:25.636 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:25.636 "is_configured": true, 00:13:25.636 "data_offset": 2048, 00:13:25.636 "data_size": 63488 00:13:25.636 }, 00:13:25.636 { 00:13:25.636 "name": "pt2", 00:13:25.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:25.636 "is_configured": true, 00:13:25.636 "data_offset": 2048, 00:13:25.637 "data_size": 63488 00:13:25.637 }, 00:13:25.637 { 00:13:25.637 "name": "pt3", 00:13:25.637 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:25.637 "is_configured": true, 00:13:25.637 "data_offset": 2048, 00:13:25.637 "data_size": 63488 00:13:25.637 }, 00:13:25.637 { 00:13:25.637 "name": "pt4", 00:13:25.637 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:25.637 "is_configured": true, 00:13:25.637 "data_offset": 2048, 00:13:25.637 "data_size": 63488 00:13:25.637 } 00:13:25.637 ] 00:13:25.637 } 00:13:25.637 } 00:13:25.637 }' 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:25.637 pt2 00:13:25.637 pt3 00:13:25.637 pt4' 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.637 11:23:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.637 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.897 [2024-11-20 11:23:08.825758] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b2ede113-656b-4dc6-8643-3d3bef99a212 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b2ede113-656b-4dc6-8643-3d3bef99a212 ']' 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.897 [2024-11-20 11:23:08.869346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:25.897 [2024-11-20 11:23:08.869423] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:25.897 [2024-11-20 11:23:08.869551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.897 [2024-11-20 11:23:08.869671] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:25.897 [2024-11-20 11:23:08.869742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.897 11:23:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:26.157 11:23:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.157 [2024-11-20 11:23:09.037088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:26.157 [2024-11-20 11:23:09.039179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:26.157 [2024-11-20 11:23:09.039284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:26.157 [2024-11-20 11:23:09.039340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:26.157 [2024-11-20 11:23:09.039427] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:26.157 [2024-11-20 11:23:09.039610] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:26.157 [2024-11-20 11:23:09.039676] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:26.157 [2024-11-20 11:23:09.039741] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:26.157 [2024-11-20 11:23:09.039789] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:26.157 [2024-11-20 11:23:09.039822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:13:26.157 request: 00:13:26.157 { 00:13:26.157 "name": "raid_bdev1", 00:13:26.157 "raid_level": "raid1", 00:13:26.157 "base_bdevs": [ 00:13:26.157 "malloc1", 00:13:26.157 "malloc2", 00:13:26.157 "malloc3", 00:13:26.157 "malloc4" 00:13:26.157 ], 00:13:26.157 "superblock": false, 00:13:26.157 "method": "bdev_raid_create", 00:13:26.157 "req_id": 1 00:13:26.157 } 00:13:26.157 Got JSON-RPC error response 00:13:26.157 response: 00:13:26.157 { 00:13:26.157 "code": -17, 00:13:26.157 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:26.157 } 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:26.157 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:26.158 
11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.158 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.158 [2024-11-20 11:23:09.104949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:26.158 [2024-11-20 11:23:09.105095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.158 [2024-11-20 11:23:09.105131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:26.158 [2024-11-20 11:23:09.105161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.158 [2024-11-20 11:23:09.107333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.158 [2024-11-20 11:23:09.107429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:26.158 [2024-11-20 11:23:09.107563] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:26.158 [2024-11-20 11:23:09.107682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:26.158 pt1 00:13:26.158 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.158 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:26.158 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.158 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.158 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.158 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.158 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.158 11:23:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.158 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.158 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.158 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.158 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.158 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.158 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.158 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.158 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.158 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.158 "name": "raid_bdev1", 00:13:26.158 "uuid": "b2ede113-656b-4dc6-8643-3d3bef99a212", 00:13:26.158 "strip_size_kb": 0, 00:13:26.158 "state": "configuring", 00:13:26.158 "raid_level": "raid1", 00:13:26.158 "superblock": true, 00:13:26.158 "num_base_bdevs": 4, 00:13:26.158 "num_base_bdevs_discovered": 1, 00:13:26.158 "num_base_bdevs_operational": 4, 00:13:26.158 "base_bdevs_list": [ 00:13:26.158 { 00:13:26.158 "name": "pt1", 00:13:26.158 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:26.158 "is_configured": true, 00:13:26.158 "data_offset": 2048, 00:13:26.158 "data_size": 63488 00:13:26.158 }, 00:13:26.158 { 00:13:26.158 "name": null, 00:13:26.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:26.158 "is_configured": false, 00:13:26.158 "data_offset": 2048, 00:13:26.158 "data_size": 63488 00:13:26.158 }, 00:13:26.158 { 00:13:26.158 "name": null, 00:13:26.158 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:26.158 
"is_configured": false, 00:13:26.158 "data_offset": 2048, 00:13:26.158 "data_size": 63488 00:13:26.158 }, 00:13:26.158 { 00:13:26.158 "name": null, 00:13:26.158 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:26.158 "is_configured": false, 00:13:26.158 "data_offset": 2048, 00:13:26.158 "data_size": 63488 00:13:26.158 } 00:13:26.158 ] 00:13:26.158 }' 00:13:26.158 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.158 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.467 [2024-11-20 11:23:09.524256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:26.467 [2024-11-20 11:23:09.524389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.467 [2024-11-20 11:23:09.524432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:26.467 [2024-11-20 11:23:09.524485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.467 [2024-11-20 11:23:09.525022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.467 [2024-11-20 11:23:09.525094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:26.467 [2024-11-20 11:23:09.525223] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:26.467 [2024-11-20 11:23:09.525294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:13:26.467 pt2 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.467 [2024-11-20 11:23:09.536222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.467 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.468 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.728 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.728 "name": "raid_bdev1", 00:13:26.728 "uuid": "b2ede113-656b-4dc6-8643-3d3bef99a212", 00:13:26.728 "strip_size_kb": 0, 00:13:26.728 "state": "configuring", 00:13:26.728 "raid_level": "raid1", 00:13:26.728 "superblock": true, 00:13:26.728 "num_base_bdevs": 4, 00:13:26.728 "num_base_bdevs_discovered": 1, 00:13:26.728 "num_base_bdevs_operational": 4, 00:13:26.728 "base_bdevs_list": [ 00:13:26.728 { 00:13:26.728 "name": "pt1", 00:13:26.728 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:26.728 "is_configured": true, 00:13:26.728 "data_offset": 2048, 00:13:26.728 "data_size": 63488 00:13:26.728 }, 00:13:26.728 { 00:13:26.728 "name": null, 00:13:26.728 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:26.728 "is_configured": false, 00:13:26.728 "data_offset": 0, 00:13:26.728 "data_size": 63488 00:13:26.728 }, 00:13:26.728 { 00:13:26.728 "name": null, 00:13:26.728 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:26.728 "is_configured": false, 00:13:26.728 "data_offset": 2048, 00:13:26.728 "data_size": 63488 00:13:26.728 }, 00:13:26.728 { 00:13:26.728 "name": null, 00:13:26.728 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:26.728 "is_configured": false, 00:13:26.728 "data_offset": 2048, 00:13:26.728 "data_size": 63488 00:13:26.728 } 00:13:26.728 ] 00:13:26.728 }' 00:13:26.728 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.728 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.987 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:13:26.987 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:26.987 11:23:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:26.987 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.987 11:23:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.987 [2024-11-20 11:23:10.003464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:26.987 [2024-11-20 11:23:10.003626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.987 [2024-11-20 11:23:10.003678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:26.987 [2024-11-20 11:23:10.003718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.987 [2024-11-20 11:23:10.004236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.987 [2024-11-20 11:23:10.004301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:26.987 [2024-11-20 11:23:10.004434] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:26.987 [2024-11-20 11:23:10.004510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:26.987 pt2 00:13:26.987 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.987 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:26.987 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:26.987 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:26.987 11:23:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.987 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.987 [2024-11-20 11:23:10.015402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:26.987 [2024-11-20 11:23:10.015534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.987 [2024-11-20 11:23:10.015581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:26.987 [2024-11-20 11:23:10.015617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.987 [2024-11-20 11:23:10.016093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.987 [2024-11-20 11:23:10.016153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:26.987 [2024-11-20 11:23:10.016269] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:26.987 [2024-11-20 11:23:10.016323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:26.987 pt3 00:13:26.987 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.987 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:26.987 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:26.987 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:26.987 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.987 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.987 [2024-11-20 11:23:10.027357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:26.987 [2024-11-20 
11:23:10.027408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.987 [2024-11-20 11:23:10.027426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:26.987 [2024-11-20 11:23:10.027435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.987 [2024-11-20 11:23:10.027919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.987 [2024-11-20 11:23:10.027946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:26.987 [2024-11-20 11:23:10.028022] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:26.987 [2024-11-20 11:23:10.028043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:26.988 [2024-11-20 11:23:10.028200] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:26.988 [2024-11-20 11:23:10.028210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:26.988 [2024-11-20 11:23:10.028518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:26.988 [2024-11-20 11:23:10.028695] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:26.988 [2024-11-20 11:23:10.028710] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:26.988 [2024-11-20 11:23:10.028890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.988 pt4 00:13:26.988 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.988 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:26.988 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:26.988 11:23:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:26.988 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.988 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.988 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.988 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.988 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.988 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.988 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.988 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.988 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.988 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.988 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.988 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.988 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.988 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.988 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.988 "name": "raid_bdev1", 00:13:26.988 "uuid": "b2ede113-656b-4dc6-8643-3d3bef99a212", 00:13:26.988 "strip_size_kb": 0, 00:13:26.988 "state": "online", 00:13:26.988 "raid_level": "raid1", 00:13:26.988 "superblock": true, 00:13:26.988 "num_base_bdevs": 4, 00:13:26.988 
"num_base_bdevs_discovered": 4, 00:13:26.988 "num_base_bdevs_operational": 4, 00:13:26.988 "base_bdevs_list": [ 00:13:26.988 { 00:13:26.988 "name": "pt1", 00:13:26.988 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:26.988 "is_configured": true, 00:13:26.988 "data_offset": 2048, 00:13:26.988 "data_size": 63488 00:13:26.988 }, 00:13:26.988 { 00:13:26.988 "name": "pt2", 00:13:26.988 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:26.988 "is_configured": true, 00:13:26.988 "data_offset": 2048, 00:13:26.988 "data_size": 63488 00:13:26.988 }, 00:13:26.988 { 00:13:26.988 "name": "pt3", 00:13:26.988 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:26.988 "is_configured": true, 00:13:26.988 "data_offset": 2048, 00:13:26.988 "data_size": 63488 00:13:26.988 }, 00:13:26.988 { 00:13:26.988 "name": "pt4", 00:13:26.988 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:26.988 "is_configured": true, 00:13:26.988 "data_offset": 2048, 00:13:26.988 "data_size": 63488 00:13:26.988 } 00:13:26.988 ] 00:13:26.988 }' 00:13:26.988 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.988 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.557 [2024-11-20 11:23:10.530886] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:27.557 "name": "raid_bdev1", 00:13:27.557 "aliases": [ 00:13:27.557 "b2ede113-656b-4dc6-8643-3d3bef99a212" 00:13:27.557 ], 00:13:27.557 "product_name": "Raid Volume", 00:13:27.557 "block_size": 512, 00:13:27.557 "num_blocks": 63488, 00:13:27.557 "uuid": "b2ede113-656b-4dc6-8643-3d3bef99a212", 00:13:27.557 "assigned_rate_limits": { 00:13:27.557 "rw_ios_per_sec": 0, 00:13:27.557 "rw_mbytes_per_sec": 0, 00:13:27.557 "r_mbytes_per_sec": 0, 00:13:27.557 "w_mbytes_per_sec": 0 00:13:27.557 }, 00:13:27.557 "claimed": false, 00:13:27.557 "zoned": false, 00:13:27.557 "supported_io_types": { 00:13:27.557 "read": true, 00:13:27.557 "write": true, 00:13:27.557 "unmap": false, 00:13:27.557 "flush": false, 00:13:27.557 "reset": true, 00:13:27.557 "nvme_admin": false, 00:13:27.557 "nvme_io": false, 00:13:27.557 "nvme_io_md": false, 00:13:27.557 "write_zeroes": true, 00:13:27.557 "zcopy": false, 00:13:27.557 "get_zone_info": false, 00:13:27.557 "zone_management": false, 00:13:27.557 "zone_append": false, 00:13:27.557 "compare": false, 00:13:27.557 "compare_and_write": false, 00:13:27.557 "abort": false, 00:13:27.557 "seek_hole": false, 00:13:27.557 "seek_data": false, 00:13:27.557 "copy": false, 00:13:27.557 "nvme_iov_md": false 00:13:27.557 }, 00:13:27.557 "memory_domains": [ 00:13:27.557 { 00:13:27.557 "dma_device_id": "system", 00:13:27.557 
"dma_device_type": 1 00:13:27.557 }, 00:13:27.557 { 00:13:27.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.557 "dma_device_type": 2 00:13:27.557 }, 00:13:27.557 { 00:13:27.557 "dma_device_id": "system", 00:13:27.557 "dma_device_type": 1 00:13:27.557 }, 00:13:27.557 { 00:13:27.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.557 "dma_device_type": 2 00:13:27.557 }, 00:13:27.557 { 00:13:27.557 "dma_device_id": "system", 00:13:27.557 "dma_device_type": 1 00:13:27.557 }, 00:13:27.557 { 00:13:27.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.557 "dma_device_type": 2 00:13:27.557 }, 00:13:27.557 { 00:13:27.557 "dma_device_id": "system", 00:13:27.557 "dma_device_type": 1 00:13:27.557 }, 00:13:27.557 { 00:13:27.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.557 "dma_device_type": 2 00:13:27.557 } 00:13:27.557 ], 00:13:27.557 "driver_specific": { 00:13:27.557 "raid": { 00:13:27.557 "uuid": "b2ede113-656b-4dc6-8643-3d3bef99a212", 00:13:27.557 "strip_size_kb": 0, 00:13:27.557 "state": "online", 00:13:27.557 "raid_level": "raid1", 00:13:27.557 "superblock": true, 00:13:27.557 "num_base_bdevs": 4, 00:13:27.557 "num_base_bdevs_discovered": 4, 00:13:27.557 "num_base_bdevs_operational": 4, 00:13:27.557 "base_bdevs_list": [ 00:13:27.557 { 00:13:27.557 "name": "pt1", 00:13:27.557 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:27.557 "is_configured": true, 00:13:27.557 "data_offset": 2048, 00:13:27.557 "data_size": 63488 00:13:27.557 }, 00:13:27.557 { 00:13:27.557 "name": "pt2", 00:13:27.557 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:27.557 "is_configured": true, 00:13:27.557 "data_offset": 2048, 00:13:27.557 "data_size": 63488 00:13:27.557 }, 00:13:27.557 { 00:13:27.557 "name": "pt3", 00:13:27.557 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:27.557 "is_configured": true, 00:13:27.557 "data_offset": 2048, 00:13:27.557 "data_size": 63488 00:13:27.557 }, 00:13:27.557 { 00:13:27.557 "name": "pt4", 00:13:27.557 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:13:27.557 "is_configured": true, 00:13:27.557 "data_offset": 2048, 00:13:27.557 "data_size": 63488 00:13:27.557 } 00:13:27.557 ] 00:13:27.557 } 00:13:27.557 } 00:13:27.557 }' 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:27.557 pt2 00:13:27.557 pt3 00:13:27.557 pt4' 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.557 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.817 11:23:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:27.817 [2024-11-20 11:23:10.866299] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b2ede113-656b-4dc6-8643-3d3bef99a212 '!=' b2ede113-656b-4dc6-8643-3d3bef99a212 ']' 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.817 [2024-11-20 11:23:10.913942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:27.817 11:23:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.817 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.077 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.077 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.077 "name": "raid_bdev1", 00:13:28.077 "uuid": "b2ede113-656b-4dc6-8643-3d3bef99a212", 00:13:28.077 "strip_size_kb": 0, 00:13:28.077 "state": "online", 
00:13:28.077 "raid_level": "raid1", 00:13:28.077 "superblock": true, 00:13:28.077 "num_base_bdevs": 4, 00:13:28.077 "num_base_bdevs_discovered": 3, 00:13:28.077 "num_base_bdevs_operational": 3, 00:13:28.077 "base_bdevs_list": [ 00:13:28.077 { 00:13:28.077 "name": null, 00:13:28.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.077 "is_configured": false, 00:13:28.077 "data_offset": 0, 00:13:28.077 "data_size": 63488 00:13:28.077 }, 00:13:28.077 { 00:13:28.077 "name": "pt2", 00:13:28.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:28.077 "is_configured": true, 00:13:28.077 "data_offset": 2048, 00:13:28.077 "data_size": 63488 00:13:28.077 }, 00:13:28.077 { 00:13:28.077 "name": "pt3", 00:13:28.077 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:28.077 "is_configured": true, 00:13:28.077 "data_offset": 2048, 00:13:28.077 "data_size": 63488 00:13:28.077 }, 00:13:28.077 { 00:13:28.077 "name": "pt4", 00:13:28.077 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:28.077 "is_configured": true, 00:13:28.077 "data_offset": 2048, 00:13:28.077 "data_size": 63488 00:13:28.077 } 00:13:28.077 ] 00:13:28.077 }' 00:13:28.077 11:23:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.077 11:23:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.336 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:28.336 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.336 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.336 [2024-11-20 11:23:11.389058] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:28.336 [2024-11-20 11:23:11.389142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.336 [2024-11-20 11:23:11.389249] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:13:28.336 [2024-11-20 11:23:11.389348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:28.336 [2024-11-20 11:23:11.389400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:28.336 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.336 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.336 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:28.336 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.336 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.336 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.336 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:28.336 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:28.336 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:28.336 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:28.336 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:28.336 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.336 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:28.595 
11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.595 [2024-11-20 11:23:11.484886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:28.595 [2024-11-20 11:23:11.484992] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.595 [2024-11-20 11:23:11.485031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:28.595 [2024-11-20 11:23:11.485060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.595 [2024-11-20 11:23:11.487401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.595 [2024-11-20 11:23:11.487514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:28.595 [2024-11-20 11:23:11.487660] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:28.595 [2024-11-20 11:23:11.487759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:28.595 pt2 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.595 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.595 "name": "raid_bdev1", 00:13:28.595 "uuid": "b2ede113-656b-4dc6-8643-3d3bef99a212", 00:13:28.595 "strip_size_kb": 0, 00:13:28.595 "state": "configuring", 00:13:28.595 "raid_level": "raid1", 00:13:28.595 "superblock": true, 00:13:28.595 "num_base_bdevs": 4, 00:13:28.595 "num_base_bdevs_discovered": 1, 00:13:28.595 "num_base_bdevs_operational": 3, 00:13:28.595 "base_bdevs_list": [ 00:13:28.595 { 00:13:28.595 "name": null, 00:13:28.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.595 "is_configured": false, 00:13:28.595 "data_offset": 2048, 00:13:28.595 "data_size": 63488 00:13:28.595 }, 00:13:28.595 { 00:13:28.595 "name": "pt2", 00:13:28.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:28.595 "is_configured": true, 00:13:28.595 "data_offset": 2048, 00:13:28.595 "data_size": 63488 00:13:28.595 }, 00:13:28.595 { 00:13:28.595 "name": null, 00:13:28.595 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:28.595 "is_configured": false, 00:13:28.596 "data_offset": 2048, 00:13:28.596 "data_size": 63488 00:13:28.596 }, 00:13:28.596 { 00:13:28.596 "name": null, 00:13:28.596 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:28.596 "is_configured": false, 00:13:28.596 "data_offset": 2048, 00:13:28.596 "data_size": 63488 00:13:28.596 } 00:13:28.596 ] 00:13:28.596 }' 
00:13:28.596 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.596 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.855 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:28.855 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:28.855 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:28.855 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.855 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.855 [2024-11-20 11:23:11.952148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:28.855 [2024-11-20 11:23:11.952270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.855 [2024-11-20 11:23:11.952318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:28.855 [2024-11-20 11:23:11.952331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.855 [2024-11-20 11:23:11.952880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.855 [2024-11-20 11:23:11.952912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:28.855 [2024-11-20 11:23:11.953019] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:28.855 [2024-11-20 11:23:11.953045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:28.855 pt3 00:13:28.855 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.855 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:13:28.855 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.855 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.855 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.855 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.855 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.855 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.855 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.855 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.855 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.855 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.855 11:23:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.855 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.855 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.128 11:23:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.128 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.128 "name": "raid_bdev1", 00:13:29.128 "uuid": "b2ede113-656b-4dc6-8643-3d3bef99a212", 00:13:29.128 "strip_size_kb": 0, 00:13:29.128 "state": "configuring", 00:13:29.128 "raid_level": "raid1", 00:13:29.128 "superblock": true, 00:13:29.128 "num_base_bdevs": 4, 00:13:29.128 "num_base_bdevs_discovered": 2, 00:13:29.128 "num_base_bdevs_operational": 3, 00:13:29.128 
"base_bdevs_list": [ 00:13:29.128 { 00:13:29.128 "name": null, 00:13:29.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.128 "is_configured": false, 00:13:29.128 "data_offset": 2048, 00:13:29.128 "data_size": 63488 00:13:29.128 }, 00:13:29.128 { 00:13:29.128 "name": "pt2", 00:13:29.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:29.128 "is_configured": true, 00:13:29.128 "data_offset": 2048, 00:13:29.128 "data_size": 63488 00:13:29.128 }, 00:13:29.128 { 00:13:29.128 "name": "pt3", 00:13:29.128 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:29.128 "is_configured": true, 00:13:29.128 "data_offset": 2048, 00:13:29.128 "data_size": 63488 00:13:29.128 }, 00:13:29.128 { 00:13:29.128 "name": null, 00:13:29.128 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:29.128 "is_configured": false, 00:13:29.128 "data_offset": 2048, 00:13:29.128 "data_size": 63488 00:13:29.128 } 00:13:29.128 ] 00:13:29.128 }' 00:13:29.128 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.128 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.389 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:29.389 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:29.389 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:13:29.389 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:29.389 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.389 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.389 [2024-11-20 11:23:12.419367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:29.389 [2024-11-20 11:23:12.419523] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.389 [2024-11-20 11:23:12.419569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:29.389 [2024-11-20 11:23:12.419598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.389 [2024-11-20 11:23:12.420071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.389 [2024-11-20 11:23:12.420130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:29.389 [2024-11-20 11:23:12.420243] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:29.389 [2024-11-20 11:23:12.420301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:29.389 [2024-11-20 11:23:12.420498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:29.389 [2024-11-20 11:23:12.420539] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:29.389 [2024-11-20 11:23:12.420799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:29.389 [2024-11-20 11:23:12.420997] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:29.389 [2024-11-20 11:23:12.421044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:29.390 [2024-11-20 11:23:12.421220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.390 pt4 00:13:29.390 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.390 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:29.390 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.390 11:23:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.390 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.390 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.390 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.390 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.390 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.390 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.390 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.390 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.390 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.390 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.390 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.390 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.390 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.390 "name": "raid_bdev1", 00:13:29.390 "uuid": "b2ede113-656b-4dc6-8643-3d3bef99a212", 00:13:29.390 "strip_size_kb": 0, 00:13:29.390 "state": "online", 00:13:29.390 "raid_level": "raid1", 00:13:29.390 "superblock": true, 00:13:29.390 "num_base_bdevs": 4, 00:13:29.390 "num_base_bdevs_discovered": 3, 00:13:29.390 "num_base_bdevs_operational": 3, 00:13:29.390 "base_bdevs_list": [ 00:13:29.390 { 00:13:29.390 "name": null, 00:13:29.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.390 "is_configured": false, 00:13:29.390 
"data_offset": 2048, 00:13:29.390 "data_size": 63488 00:13:29.390 }, 00:13:29.390 { 00:13:29.390 "name": "pt2", 00:13:29.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:29.390 "is_configured": true, 00:13:29.390 "data_offset": 2048, 00:13:29.390 "data_size": 63488 00:13:29.390 }, 00:13:29.390 { 00:13:29.390 "name": "pt3", 00:13:29.390 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:29.390 "is_configured": true, 00:13:29.390 "data_offset": 2048, 00:13:29.390 "data_size": 63488 00:13:29.390 }, 00:13:29.390 { 00:13:29.390 "name": "pt4", 00:13:29.390 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:29.390 "is_configured": true, 00:13:29.390 "data_offset": 2048, 00:13:29.390 "data_size": 63488 00:13:29.390 } 00:13:29.390 ] 00:13:29.390 }' 00:13:29.390 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.390 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.967 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.968 [2024-11-20 11:23:12.890528] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:29.968 [2024-11-20 11:23:12.890604] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:29.968 [2024-11-20 11:23:12.890724] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:29.968 [2024-11-20 11:23:12.890816] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:29.968 [2024-11-20 11:23:12.890858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:29.968 11:23:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.968 [2024-11-20 11:23:12.970392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:29.968 [2024-11-20 11:23:12.970543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:29.968 [2024-11-20 11:23:12.970588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:13:29.968 [2024-11-20 11:23:12.970652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.968 [2024-11-20 11:23:12.973017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.968 [2024-11-20 11:23:12.973111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:29.968 [2024-11-20 11:23:12.973253] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:29.968 [2024-11-20 11:23:12.973344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:29.968 [2024-11-20 11:23:12.973558] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:29.968 [2024-11-20 11:23:12.973628] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:29.968 [2024-11-20 11:23:12.973692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:29.968 [2024-11-20 11:23:12.973839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:29.968 [2024-11-20 11:23:12.974001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:29.968 pt1 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.968 11:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.968 11:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.968 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.968 "name": "raid_bdev1", 00:13:29.968 "uuid": "b2ede113-656b-4dc6-8643-3d3bef99a212", 00:13:29.968 "strip_size_kb": 0, 00:13:29.968 "state": "configuring", 00:13:29.968 "raid_level": "raid1", 00:13:29.968 "superblock": true, 00:13:29.968 "num_base_bdevs": 4, 00:13:29.968 "num_base_bdevs_discovered": 2, 00:13:29.968 "num_base_bdevs_operational": 3, 00:13:29.968 "base_bdevs_list": [ 00:13:29.968 { 00:13:29.968 "name": null, 00:13:29.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.968 "is_configured": false, 00:13:29.968 "data_offset": 2048, 00:13:29.968 
"data_size": 63488 00:13:29.968 }, 00:13:29.968 { 00:13:29.968 "name": "pt2", 00:13:29.968 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:29.968 "is_configured": true, 00:13:29.968 "data_offset": 2048, 00:13:29.968 "data_size": 63488 00:13:29.968 }, 00:13:29.968 { 00:13:29.968 "name": "pt3", 00:13:29.968 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:29.968 "is_configured": true, 00:13:29.968 "data_offset": 2048, 00:13:29.968 "data_size": 63488 00:13:29.968 }, 00:13:29.968 { 00:13:29.968 "name": null, 00:13:29.968 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:29.968 "is_configured": false, 00:13:29.968 "data_offset": 2048, 00:13:29.968 "data_size": 63488 00:13:29.968 } 00:13:29.968 ] 00:13:29.968 }' 00:13:29.968 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.968 11:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.537 [2024-11-20 
11:23:13.501518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:30.537 [2024-11-20 11:23:13.501637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.537 [2024-11-20 11:23:13.501677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:30.537 [2024-11-20 11:23:13.501706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.537 [2024-11-20 11:23:13.502163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.537 [2024-11-20 11:23:13.502225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:30.537 [2024-11-20 11:23:13.502345] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:30.537 [2024-11-20 11:23:13.502406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:30.537 [2024-11-20 11:23:13.502581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:30.537 [2024-11-20 11:23:13.502622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:30.537 [2024-11-20 11:23:13.502894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:30.537 [2024-11-20 11:23:13.503077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:30.537 [2024-11-20 11:23:13.503119] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:30.537 [2024-11-20 11:23:13.503302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.537 pt4 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:30.537 11:23:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.537 "name": "raid_bdev1", 00:13:30.537 "uuid": "b2ede113-656b-4dc6-8643-3d3bef99a212", 00:13:30.537 "strip_size_kb": 0, 00:13:30.537 "state": "online", 00:13:30.537 "raid_level": "raid1", 00:13:30.537 "superblock": true, 00:13:30.537 "num_base_bdevs": 4, 00:13:30.537 "num_base_bdevs_discovered": 3, 00:13:30.537 "num_base_bdevs_operational": 3, 00:13:30.537 "base_bdevs_list": [ 00:13:30.537 { 
00:13:30.537 "name": null, 00:13:30.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.537 "is_configured": false, 00:13:30.537 "data_offset": 2048, 00:13:30.537 "data_size": 63488 00:13:30.537 }, 00:13:30.537 { 00:13:30.537 "name": "pt2", 00:13:30.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:30.537 "is_configured": true, 00:13:30.537 "data_offset": 2048, 00:13:30.537 "data_size": 63488 00:13:30.537 }, 00:13:30.537 { 00:13:30.537 "name": "pt3", 00:13:30.537 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:30.537 "is_configured": true, 00:13:30.537 "data_offset": 2048, 00:13:30.537 "data_size": 63488 00:13:30.537 }, 00:13:30.537 { 00:13:30.537 "name": "pt4", 00:13:30.537 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:30.537 "is_configured": true, 00:13:30.537 "data_offset": 2048, 00:13:30.537 "data_size": 63488 00:13:30.537 } 00:13:30.537 ] 00:13:30.537 }' 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.537 11:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.106 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:31.106 11:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.106 11:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.106 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:31.106 11:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.106 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:31.106 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:31.106 11:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:31.106 
11:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.106 11:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.106 [2024-11-20 11:23:13.992999] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:31.106 11:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.106 11:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b2ede113-656b-4dc6-8643-3d3bef99a212 '!=' b2ede113-656b-4dc6-8643-3d3bef99a212 ']' 00:13:31.106 11:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74676 00:13:31.106 11:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74676 ']' 00:13:31.106 11:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74676 00:13:31.106 11:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:31.106 11:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.106 11:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74676 00:13:31.106 11:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:31.106 11:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:31.106 11:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74676' 00:13:31.106 killing process with pid 74676 00:13:31.106 11:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74676 00:13:31.106 [2024-11-20 11:23:14.058376] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:31.106 [2024-11-20 11:23:14.058513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.106 11:23:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74676 00:13:31.106 [2024-11-20 11:23:14.058599] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.106 [2024-11-20 11:23:14.058613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:31.366 [2024-11-20 11:23:14.474005] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:32.768 11:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:32.768 00:13:32.768 real 0m8.810s 00:13:32.768 user 0m13.870s 00:13:32.768 sys 0m1.618s 00:13:32.768 11:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.768 11:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.768 ************************************ 00:13:32.768 END TEST raid_superblock_test 00:13:32.768 ************************************ 00:13:32.768 11:23:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:13:32.768 11:23:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:32.768 11:23:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.768 11:23:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:32.768 ************************************ 00:13:32.768 START TEST raid_read_error_test 00:13:32.768 ************************************ 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:32.768 
11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:32.768 11:23:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YjtJEQ4u9b 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75163 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75163 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75163 ']' 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.768 11:23:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.768 [2024-11-20 11:23:15.792176] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:13:32.768 [2024-11-20 11:23:15.792384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75163 ] 00:13:33.027 [2024-11-20 11:23:15.944791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.027 [2024-11-20 11:23:16.054541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.286 [2024-11-20 11:23:16.253847] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.287 [2024-11-20 11:23:16.253881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.546 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.546 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:33.546 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:33.546 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:33.546 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.546 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.546 BaseBdev1_malloc 00:13:33.546 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.546 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:33.805 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.805 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.805 true 00:13:33.805 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:33.805 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:33.805 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.805 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.805 [2024-11-20 11:23:16.678118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:33.805 [2024-11-20 11:23:16.678223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.805 [2024-11-20 11:23:16.678260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:33.805 [2024-11-20 11:23:16.678289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.805 [2024-11-20 11:23:16.680399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.805 [2024-11-20 11:23:16.680495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:33.805 BaseBdev1 00:13:33.805 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.805 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:33.805 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:33.805 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.805 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.805 BaseBdev2_malloc 00:13:33.805 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.805 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:33.805 11:23:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.805 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.805 true 00:13:33.805 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.805 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:33.805 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.806 [2024-11-20 11:23:16.744594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:33.806 [2024-11-20 11:23:16.744737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.806 [2024-11-20 11:23:16.744788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:33.806 [2024-11-20 11:23:16.744825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.806 [2024-11-20 11:23:16.747211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.806 [2024-11-20 11:23:16.747300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:33.806 BaseBdev2 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.806 BaseBdev3_malloc 00:13:33.806 11:23:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.806 true 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.806 [2024-11-20 11:23:16.834351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:33.806 [2024-11-20 11:23:16.834481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.806 [2024-11-20 11:23:16.834518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:33.806 [2024-11-20 11:23:16.834561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.806 [2024-11-20 11:23:16.836685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.806 [2024-11-20 11:23:16.836761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:33.806 BaseBdev3 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.806 BaseBdev4_malloc 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.806 true 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.806 [2024-11-20 11:23:16.900850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:33.806 [2024-11-20 11:23:16.900957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.806 [2024-11-20 11:23:16.900979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:33.806 [2024-11-20 11:23:16.900990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.806 [2024-11-20 11:23:16.903141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.806 [2024-11-20 11:23:16.903186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:33.806 BaseBdev4 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.806 [2024-11-20 11:23:16.912909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:33.806 [2024-11-20 11:23:16.914799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:33.806 [2024-11-20 11:23:16.914922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:33.806 [2024-11-20 11:23:16.915025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:33.806 [2024-11-20 11:23:16.915331] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:33.806 [2024-11-20 11:23:16.915387] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:33.806 [2024-11-20 11:23:16.915689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:33.806 [2024-11-20 11:23:16.915905] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:33.806 [2024-11-20 11:23:16.915947] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:33.806 [2024-11-20 11:23:16.916159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:33.806 11:23:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.806 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.065 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.065 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.065 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.065 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.065 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.065 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.065 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.065 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.065 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.065 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.065 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.065 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.065 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.065 "name": "raid_bdev1", 00:13:34.065 "uuid": "8fbec2d8-1da5-4530-b927-c4a5e57866dc", 00:13:34.065 "strip_size_kb": 0, 00:13:34.065 "state": "online", 00:13:34.065 "raid_level": "raid1", 00:13:34.065 "superblock": true, 00:13:34.065 "num_base_bdevs": 4, 00:13:34.065 "num_base_bdevs_discovered": 4, 00:13:34.065 "num_base_bdevs_operational": 4, 00:13:34.065 "base_bdevs_list": [ 00:13:34.065 { 
00:13:34.065 "name": "BaseBdev1", 00:13:34.065 "uuid": "6abfced3-d079-5d0b-9db9-942e5c4ce703", 00:13:34.065 "is_configured": true, 00:13:34.065 "data_offset": 2048, 00:13:34.065 "data_size": 63488 00:13:34.065 }, 00:13:34.065 { 00:13:34.065 "name": "BaseBdev2", 00:13:34.065 "uuid": "ff2d53f1-bc4a-5d33-b291-65a3c9e01415", 00:13:34.065 "is_configured": true, 00:13:34.065 "data_offset": 2048, 00:13:34.065 "data_size": 63488 00:13:34.065 }, 00:13:34.065 { 00:13:34.065 "name": "BaseBdev3", 00:13:34.065 "uuid": "8f701080-4481-5de4-a5c2-3da910db5a74", 00:13:34.065 "is_configured": true, 00:13:34.065 "data_offset": 2048, 00:13:34.065 "data_size": 63488 00:13:34.065 }, 00:13:34.065 { 00:13:34.065 "name": "BaseBdev4", 00:13:34.065 "uuid": "942d8b55-0550-57cd-9c78-511435044dc8", 00:13:34.065 "is_configured": true, 00:13:34.065 "data_offset": 2048, 00:13:34.065 "data_size": 63488 00:13:34.065 } 00:13:34.066 ] 00:13:34.066 }' 00:13:34.066 11:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.066 11:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.326 11:23:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:34.326 11:23:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:34.586 [2024-11-20 11:23:17.493233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.525 11:23:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.525 11:23:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.525 "name": "raid_bdev1", 00:13:35.525 "uuid": "8fbec2d8-1da5-4530-b927-c4a5e57866dc", 00:13:35.525 "strip_size_kb": 0, 00:13:35.525 "state": "online", 00:13:35.525 "raid_level": "raid1", 00:13:35.525 "superblock": true, 00:13:35.525 "num_base_bdevs": 4, 00:13:35.525 "num_base_bdevs_discovered": 4, 00:13:35.525 "num_base_bdevs_operational": 4, 00:13:35.525 "base_bdevs_list": [ 00:13:35.525 { 00:13:35.525 "name": "BaseBdev1", 00:13:35.525 "uuid": "6abfced3-d079-5d0b-9db9-942e5c4ce703", 00:13:35.525 "is_configured": true, 00:13:35.525 "data_offset": 2048, 00:13:35.525 "data_size": 63488 00:13:35.525 }, 00:13:35.525 { 00:13:35.525 "name": "BaseBdev2", 00:13:35.525 "uuid": "ff2d53f1-bc4a-5d33-b291-65a3c9e01415", 00:13:35.525 "is_configured": true, 00:13:35.525 "data_offset": 2048, 00:13:35.525 "data_size": 63488 00:13:35.525 }, 00:13:35.525 { 00:13:35.525 "name": "BaseBdev3", 00:13:35.525 "uuid": "8f701080-4481-5de4-a5c2-3da910db5a74", 00:13:35.525 "is_configured": true, 00:13:35.525 "data_offset": 2048, 00:13:35.525 "data_size": 63488 00:13:35.525 }, 00:13:35.525 { 00:13:35.525 "name": "BaseBdev4", 00:13:35.525 "uuid": "942d8b55-0550-57cd-9c78-511435044dc8", 00:13:35.525 "is_configured": true, 00:13:35.525 "data_offset": 2048, 00:13:35.525 "data_size": 63488 00:13:35.525 } 00:13:35.525 ] 00:13:35.525 }' 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.525 11:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.787 11:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:35.787 11:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.787 11:23:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:35.787 [2024-11-20 11:23:18.857641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:35.787 [2024-11-20 11:23:18.857675] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:35.787 [2024-11-20 11:23:18.860548] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.787 [2024-11-20 11:23:18.860681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.787 [2024-11-20 11:23:18.860824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:35.787 [2024-11-20 11:23:18.860838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:35.787 { 00:13:35.787 "results": [ 00:13:35.787 { 00:13:35.787 "job": "raid_bdev1", 00:13:35.787 "core_mask": "0x1", 00:13:35.787 "workload": "randrw", 00:13:35.787 "percentage": 50, 00:13:35.787 "status": "finished", 00:13:35.787 "queue_depth": 1, 00:13:35.787 "io_size": 131072, 00:13:35.787 "runtime": 1.365184, 00:13:35.787 "iops": 10525.321128873471, 00:13:35.787 "mibps": 1315.6651411091839, 00:13:35.787 "io_failed": 0, 00:13:35.787 "io_timeout": 0, 00:13:35.787 "avg_latency_us": 92.35027006525753, 00:13:35.787 "min_latency_us": 23.02882096069869, 00:13:35.787 "max_latency_us": 1523.926637554585 00:13:35.787 } 00:13:35.787 ], 00:13:35.787 "core_count": 1 00:13:35.787 } 00:13:35.787 11:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.787 11:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75163 00:13:35.787 11:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75163 ']' 00:13:35.787 11:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75163 00:13:35.787 11:23:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:13:35.787 11:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:35.787 11:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75163 00:13:36.065 11:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:36.065 11:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:36.065 11:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75163' 00:13:36.065 killing process with pid 75163 00:13:36.065 11:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75163 00:13:36.065 [2024-11-20 11:23:18.906845] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:36.065 11:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75163 00:13:36.324 [2024-11-20 11:23:19.225886] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:37.703 11:23:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:37.703 11:23:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YjtJEQ4u9b 00:13:37.703 11:23:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:37.703 ************************************ 00:13:37.703 END TEST raid_read_error_test 00:13:37.703 ************************************ 00:13:37.703 11:23:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:37.703 11:23:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:37.703 11:23:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:37.703 11:23:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:37.703 11:23:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:37.703 00:13:37.703 real 0m4.714s 00:13:37.703 user 0m5.579s 00:13:37.703 sys 0m0.581s 00:13:37.703 11:23:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.703 11:23:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.703 11:23:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:13:37.703 11:23:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:37.703 11:23:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.703 11:23:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:37.703 ************************************ 00:13:37.703 START TEST raid_write_error_test 00:13:37.703 ************************************ 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TLIRseLzHV 00:13:37.703 11:23:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75309 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75309 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75309 ']' 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.703 11:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.703 [2024-11-20 11:23:20.578361] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:13:37.704 [2024-11-20 11:23:20.578583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75309 ] 00:13:37.704 [2024-11-20 11:23:20.755777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.962 [2024-11-20 11:23:20.874344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.221 [2024-11-20 11:23:21.084246] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.221 [2024-11-20 11:23:21.084320] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.480 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:38.480 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:38.480 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:38.480 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:38.480 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.480 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.480 BaseBdev1_malloc 00:13:38.480 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.480 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:38.480 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.480 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.480 true 00:13:38.481 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:38.481 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:38.481 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.481 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.481 [2024-11-20 11:23:21.499334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:38.481 [2024-11-20 11:23:21.499443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.481 [2024-11-20 11:23:21.499519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:38.481 [2024-11-20 11:23:21.499558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.481 [2024-11-20 11:23:21.501888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.481 [2024-11-20 11:23:21.501975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:38.481 BaseBdev1 00:13:38.481 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.481 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:38.481 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:38.481 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.481 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.481 BaseBdev2_malloc 00:13:38.481 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.481 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:38.481 11:23:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.481 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.481 true 00:13:38.481 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.481 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:38.481 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.481 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.481 [2024-11-20 11:23:21.567950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:38.481 [2024-11-20 11:23:21.568015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.481 [2024-11-20 11:23:21.568051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:38.481 [2024-11-20 11:23:21.568063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.481 [2024-11-20 11:23:21.570334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.481 [2024-11-20 11:23:21.570381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:38.481 BaseBdev2 00:13:38.481 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.481 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:38.481 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:38.481 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.481 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:38.748 BaseBdev3_malloc 00:13:38.748 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.748 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:38.748 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.748 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.748 true 00:13:38.748 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.749 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:38.749 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.749 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.749 [2024-11-20 11:23:21.647250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:38.749 [2024-11-20 11:23:21.647383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.749 [2024-11-20 11:23:21.647430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:38.749 [2024-11-20 11:23:21.647484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.749 [2024-11-20 11:23:21.650016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.749 [2024-11-20 11:23:21.650107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:38.749 BaseBdev3 00:13:38.749 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.749 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:38.749 11:23:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:38.749 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.749 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.749 BaseBdev4_malloc 00:13:38.749 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.749 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:38.749 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.749 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.749 true 00:13:38.749 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.749 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:38.749 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.749 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.750 [2024-11-20 11:23:21.715010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:38.750 [2024-11-20 11:23:21.715154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.750 [2024-11-20 11:23:21.715211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:38.750 [2024-11-20 11:23:21.715248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.750 [2024-11-20 11:23:21.717831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.750 [2024-11-20 11:23:21.717936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:38.750 BaseBdev4 
00:13:38.750 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.750 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:38.750 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.750 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.750 [2024-11-20 11:23:21.727023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:38.750 [2024-11-20 11:23:21.729031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:38.750 [2024-11-20 11:23:21.729153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:38.750 [2024-11-20 11:23:21.729261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:38.750 [2024-11-20 11:23:21.729560] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:38.750 [2024-11-20 11:23:21.729617] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:38.750 [2024-11-20 11:23:21.729936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:38.750 [2024-11-20 11:23:21.730175] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:38.750 [2024-11-20 11:23:21.730221] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:38.750 [2024-11-20 11:23:21.730432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.751 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.751 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:13:38.751 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.751 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.751 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.751 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.751 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:38.751 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.751 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.751 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.751 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.751 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.751 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.751 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.751 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.751 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.751 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.751 "name": "raid_bdev1", 00:13:38.751 "uuid": "c92bbb03-c41e-44b0-8b4d-91e4c5ec3643", 00:13:38.751 "strip_size_kb": 0, 00:13:38.751 "state": "online", 00:13:38.751 "raid_level": "raid1", 00:13:38.751 "superblock": true, 00:13:38.751 "num_base_bdevs": 4, 00:13:38.751 "num_base_bdevs_discovered": 4, 00:13:38.751 
"num_base_bdevs_operational": 4, 00:13:38.751 "base_bdevs_list": [ 00:13:38.751 { 00:13:38.751 "name": "BaseBdev1", 00:13:38.751 "uuid": "7c3758d6-583a-5493-9a6d-e39461bfab48", 00:13:38.751 "is_configured": true, 00:13:38.751 "data_offset": 2048, 00:13:38.751 "data_size": 63488 00:13:38.751 }, 00:13:38.751 { 00:13:38.751 "name": "BaseBdev2", 00:13:38.751 "uuid": "4c4210dc-14aa-5708-ae0f-2a22701faebe", 00:13:38.751 "is_configured": true, 00:13:38.751 "data_offset": 2048, 00:13:38.751 "data_size": 63488 00:13:38.751 }, 00:13:38.751 { 00:13:38.751 "name": "BaseBdev3", 00:13:38.751 "uuid": "ca921ae9-2486-5e2f-afaf-7cedcb547159", 00:13:38.751 "is_configured": true, 00:13:38.752 "data_offset": 2048, 00:13:38.752 "data_size": 63488 00:13:38.752 }, 00:13:38.752 { 00:13:38.752 "name": "BaseBdev4", 00:13:38.752 "uuid": "b689ec75-1831-5aa7-941d-78d37b5ea8f2", 00:13:38.752 "is_configured": true, 00:13:38.752 "data_offset": 2048, 00:13:38.752 "data_size": 63488 00:13:38.752 } 00:13:38.752 ] 00:13:38.752 }' 00:13:38.752 11:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.752 11:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.325 11:23:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:39.325 11:23:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:39.325 [2024-11-20 11:23:22.331340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.262 [2024-11-20 11:23:23.241851] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:40.262 [2024-11-20 11:23:23.241993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:40.262 [2024-11-20 11:23:23.242251] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.262 "name": "raid_bdev1", 00:13:40.262 "uuid": "c92bbb03-c41e-44b0-8b4d-91e4c5ec3643", 00:13:40.262 "strip_size_kb": 0, 00:13:40.262 "state": "online", 00:13:40.262 "raid_level": "raid1", 00:13:40.262 "superblock": true, 00:13:40.262 "num_base_bdevs": 4, 00:13:40.262 "num_base_bdevs_discovered": 3, 00:13:40.262 "num_base_bdevs_operational": 3, 00:13:40.262 "base_bdevs_list": [ 00:13:40.262 { 00:13:40.262 "name": null, 00:13:40.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.262 "is_configured": false, 00:13:40.262 "data_offset": 0, 00:13:40.262 "data_size": 63488 00:13:40.262 }, 00:13:40.262 { 00:13:40.262 "name": "BaseBdev2", 00:13:40.262 "uuid": "4c4210dc-14aa-5708-ae0f-2a22701faebe", 00:13:40.262 "is_configured": true, 00:13:40.262 "data_offset": 2048, 00:13:40.262 "data_size": 63488 00:13:40.262 }, 00:13:40.262 { 00:13:40.262 "name": "BaseBdev3", 00:13:40.262 "uuid": "ca921ae9-2486-5e2f-afaf-7cedcb547159", 00:13:40.262 "is_configured": true, 00:13:40.262 "data_offset": 2048, 00:13:40.262 "data_size": 63488 00:13:40.262 }, 00:13:40.262 { 00:13:40.262 "name": "BaseBdev4", 00:13:40.262 "uuid": "b689ec75-1831-5aa7-941d-78d37b5ea8f2", 00:13:40.262 "is_configured": true, 00:13:40.262 "data_offset": 2048, 00:13:40.262 "data_size": 63488 00:13:40.262 } 00:13:40.262 ] 
00:13:40.262 }' 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.262 11:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.827 11:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:40.827 11:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.827 11:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.827 [2024-11-20 11:23:23.734762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:40.827 [2024-11-20 11:23:23.734851] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:40.827 [2024-11-20 11:23:23.737679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.827 [2024-11-20 11:23:23.737769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.827 [2024-11-20 11:23:23.737896] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.827 [2024-11-20 11:23:23.737943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:40.827 { 00:13:40.827 "results": [ 00:13:40.827 { 00:13:40.827 "job": "raid_bdev1", 00:13:40.827 "core_mask": "0x1", 00:13:40.827 "workload": "randrw", 00:13:40.827 "percentage": 50, 00:13:40.827 "status": "finished", 00:13:40.827 "queue_depth": 1, 00:13:40.827 "io_size": 131072, 00:13:40.827 "runtime": 1.404206, 00:13:40.827 "iops": 11077.43450747255, 00:13:40.827 "mibps": 1384.6793134340687, 00:13:40.827 "io_failed": 0, 00:13:40.827 "io_timeout": 0, 00:13:40.827 "avg_latency_us": 87.49762911994205, 00:13:40.827 "min_latency_us": 23.811353711790392, 00:13:40.827 "max_latency_us": 1502.46288209607 00:13:40.827 } 00:13:40.827 ], 00:13:40.827 "core_count": 1 
00:13:40.827 } 00:13:40.827 11:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.827 11:23:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75309 00:13:40.827 11:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75309 ']' 00:13:40.827 11:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75309 00:13:40.827 11:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:40.827 11:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:40.827 11:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75309 00:13:40.827 11:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:40.827 11:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:40.827 killing process with pid 75309 00:13:40.827 11:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75309' 00:13:40.827 11:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75309 00:13:40.827 [2024-11-20 11:23:23.784706] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:40.827 11:23:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75309 00:13:41.085 [2024-11-20 11:23:24.129262] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:42.464 11:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TLIRseLzHV 00:13:42.464 11:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:42.464 11:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:42.464 11:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:13:42.464 11:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:42.464 11:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:42.464 11:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:42.464 11:23:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:42.464 00:13:42.464 real 0m4.851s 00:13:42.464 user 0m5.780s 00:13:42.464 sys 0m0.623s 00:13:42.464 ************************************ 00:13:42.464 END TEST raid_write_error_test 00:13:42.464 ************************************ 00:13:42.464 11:23:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.464 11:23:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.464 11:23:25 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:13:42.464 11:23:25 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:42.464 11:23:25 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:13:42.464 11:23:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:42.464 11:23:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.464 11:23:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:42.464 ************************************ 00:13:42.464 START TEST raid_rebuild_test 00:13:42.464 ************************************ 00:13:42.464 11:23:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:13:42.464 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:42.464 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:42.464 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:42.464 
11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:42.464 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75458 00:13:42.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75458 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75458 ']' 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.465 11:23:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.465 [2024-11-20 11:23:25.493103] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:13:42.465 [2024-11-20 11:23:25.493324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:42.465 Zero copy mechanism will not be used. 
00:13:42.465 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75458 ] 00:13:42.724 [2024-11-20 11:23:25.666598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.724 [2024-11-20 11:23:25.783816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.984 [2024-11-20 11:23:25.988707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:42.984 [2024-11-20 11:23:25.988860] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.244 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.244 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:43.244 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:43.244 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:43.244 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.244 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.505 BaseBdev1_malloc 00:13:43.505 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.505 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:43.505 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.505 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.505 [2024-11-20 11:23:26.374549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:43.505 [2024-11-20 11:23:26.374683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.505 [2024-11-20 
11:23:26.374715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:43.505 [2024-11-20 11:23:26.374727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.505 [2024-11-20 11:23:26.376861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.505 [2024-11-20 11:23:26.376903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:43.505 BaseBdev1 00:13:43.505 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.505 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:43.505 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:43.505 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.505 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.505 BaseBdev2_malloc 00:13:43.505 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.505 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:43.505 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.505 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.505 [2024-11-20 11:23:26.429150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:43.505 [2024-11-20 11:23:26.429286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.505 [2024-11-20 11:23:26.429340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:43.505 [2024-11-20 11:23:26.429375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:43.505 [2024-11-20 11:23:26.431630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.505 [2024-11-20 11:23:26.431723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:43.505 BaseBdev2 00:13:43.505 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.505 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:43.505 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.505 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.505 spare_malloc 00:13:43.505 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.505 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.506 spare_delay 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.506 [2024-11-20 11:23:26.510638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:43.506 [2024-11-20 11:23:26.510709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.506 [2024-11-20 11:23:26.510732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:13:43.506 [2024-11-20 11:23:26.510744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.506 [2024-11-20 11:23:26.513056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.506 [2024-11-20 11:23:26.513101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:43.506 spare 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.506 [2024-11-20 11:23:26.522682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:43.506 [2024-11-20 11:23:26.524825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:43.506 [2024-11-20 11:23:26.524983] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:43.506 [2024-11-20 11:23:26.525031] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:43.506 [2024-11-20 11:23:26.525373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:43.506 [2024-11-20 11:23:26.525619] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:43.506 [2024-11-20 11:23:26.525669] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:43.506 [2024-11-20 11:23:26.525905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.506 
11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.506 "name": "raid_bdev1", 00:13:43.506 "uuid": "1ebe833d-46eb-444d-966e-5b81b598f7a0", 00:13:43.506 "strip_size_kb": 0, 00:13:43.506 "state": "online", 00:13:43.506 "raid_level": "raid1", 00:13:43.506 "superblock": false, 00:13:43.506 "num_base_bdevs": 2, 00:13:43.506 "num_base_bdevs_discovered": 
2, 00:13:43.506 "num_base_bdevs_operational": 2, 00:13:43.506 "base_bdevs_list": [ 00:13:43.506 { 00:13:43.506 "name": "BaseBdev1", 00:13:43.506 "uuid": "62c9c824-810d-52a9-b2b7-37afa2257e57", 00:13:43.506 "is_configured": true, 00:13:43.506 "data_offset": 0, 00:13:43.506 "data_size": 65536 00:13:43.506 }, 00:13:43.506 { 00:13:43.506 "name": "BaseBdev2", 00:13:43.506 "uuid": "f5eb255e-7a16-50aa-96d7-cf0064b0f407", 00:13:43.506 "is_configured": true, 00:13:43.506 "data_offset": 0, 00:13:43.506 "data_size": 65536 00:13:43.506 } 00:13:43.506 ] 00:13:43.506 }' 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.506 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.075 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:44.075 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.075 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.075 11:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:44.075 [2024-11-20 11:23:26.978208] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.075 11:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.075 11:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:44.075 11:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.075 11:23:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.075 11:23:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.075 11:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:44.075 11:23:27 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.076 11:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:44.076 11:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:44.076 11:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:44.076 11:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:44.076 11:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:44.076 11:23:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.076 11:23:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:44.076 11:23:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:44.076 11:23:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:44.076 11:23:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:44.076 11:23:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:44.076 11:23:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:44.076 11:23:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.076 11:23:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:44.334 [2024-11-20 11:23:27.281465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:44.334 /dev/nbd0 00:13:44.334 11:23:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:44.334 11:23:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:44.334 11:23:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:13:44.334 11:23:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:44.334 11:23:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:44.334 11:23:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:44.334 11:23:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:44.334 11:23:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:44.334 11:23:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:44.334 11:23:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:44.334 11:23:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.334 1+0 records in 00:13:44.334 1+0 records out 00:13:44.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412134 s, 9.9 MB/s 00:13:44.334 11:23:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.334 11:23:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:44.334 11:23:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.334 11:23:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:44.334 11:23:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:44.334 11:23:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.334 11:23:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.334 11:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:44.334 11:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:13:44.335 11:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:48.623 65536+0 records in 00:13:48.623 65536+0 records out 00:13:48.623 33554432 bytes (34 MB, 32 MiB) copied, 4.28612 s, 7.8 MB/s 00:13:48.623 11:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:48.623 11:23:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:48.623 11:23:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:48.623 11:23:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:48.623 11:23:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:48.623 11:23:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:48.623 11:23:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:48.883 [2024-11-20 11:23:31.850114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:48.883 
11:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.883 [2024-11-20 11:23:31.892341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.883 "name": "raid_bdev1", 00:13:48.883 "uuid": "1ebe833d-46eb-444d-966e-5b81b598f7a0", 00:13:48.883 "strip_size_kb": 0, 00:13:48.883 "state": "online", 00:13:48.883 "raid_level": "raid1", 00:13:48.883 "superblock": false, 00:13:48.883 "num_base_bdevs": 2, 00:13:48.883 "num_base_bdevs_discovered": 1, 00:13:48.883 "num_base_bdevs_operational": 1, 00:13:48.883 "base_bdevs_list": [ 00:13:48.883 { 00:13:48.883 "name": null, 00:13:48.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.883 "is_configured": false, 00:13:48.883 "data_offset": 0, 00:13:48.883 "data_size": 65536 00:13:48.883 }, 00:13:48.883 { 00:13:48.883 "name": "BaseBdev2", 00:13:48.883 "uuid": "f5eb255e-7a16-50aa-96d7-cf0064b0f407", 00:13:48.883 "is_configured": true, 00:13:48.883 "data_offset": 0, 00:13:48.883 "data_size": 65536 00:13:48.883 } 00:13:48.883 ] 00:13:48.883 }' 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.883 11:23:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.547 11:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:49.547 11:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.547 11:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.547 [2024-11-20 11:23:32.379605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:49.547 [2024-11-20 11:23:32.396730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:13:49.547 11:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.547 11:23:32 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:49.547 [2024-11-20 11:23:32.398800] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:50.485 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.485 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.485 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.485 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.485 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.485 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.485 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.485 11:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.485 11:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.485 11:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.485 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.485 "name": "raid_bdev1", 00:13:50.485 "uuid": "1ebe833d-46eb-444d-966e-5b81b598f7a0", 00:13:50.485 "strip_size_kb": 0, 00:13:50.485 "state": "online", 00:13:50.485 "raid_level": "raid1", 00:13:50.485 "superblock": false, 00:13:50.485 "num_base_bdevs": 2, 00:13:50.485 "num_base_bdevs_discovered": 2, 00:13:50.485 "num_base_bdevs_operational": 2, 00:13:50.485 "process": { 00:13:50.485 "type": "rebuild", 00:13:50.485 "target": "spare", 00:13:50.485 "progress": { 00:13:50.485 "blocks": 20480, 00:13:50.485 "percent": 31 00:13:50.485 } 00:13:50.485 }, 00:13:50.485 "base_bdevs_list": [ 00:13:50.485 { 
00:13:50.485 "name": "spare", 00:13:50.485 "uuid": "4cf3da9e-2cc0-5e19-9139-0d671195501b", 00:13:50.485 "is_configured": true, 00:13:50.485 "data_offset": 0, 00:13:50.485 "data_size": 65536 00:13:50.485 }, 00:13:50.485 { 00:13:50.485 "name": "BaseBdev2", 00:13:50.485 "uuid": "f5eb255e-7a16-50aa-96d7-cf0064b0f407", 00:13:50.485 "is_configured": true, 00:13:50.485 "data_offset": 0, 00:13:50.485 "data_size": 65536 00:13:50.485 } 00:13:50.485 ] 00:13:50.485 }' 00:13:50.485 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.485 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.485 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.485 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.485 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:50.485 11:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.485 11:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.485 [2024-11-20 11:23:33.546475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:50.745 [2024-11-20 11:23:33.605074] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:50.745 [2024-11-20 11:23:33.605257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.745 [2024-11-20 11:23:33.605295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:50.745 [2024-11-20 11:23:33.605320] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:50.745 11:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.745 11:23:33 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:50.745 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.745 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.745 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.745 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.745 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:50.745 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.745 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.745 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.745 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.745 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.745 11:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.745 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.745 11:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.745 11:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.745 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.745 "name": "raid_bdev1", 00:13:50.745 "uuid": "1ebe833d-46eb-444d-966e-5b81b598f7a0", 00:13:50.745 "strip_size_kb": 0, 00:13:50.745 "state": "online", 00:13:50.745 "raid_level": "raid1", 00:13:50.745 "superblock": false, 00:13:50.745 "num_base_bdevs": 2, 00:13:50.745 "num_base_bdevs_discovered": 1, 
00:13:50.745 "num_base_bdevs_operational": 1, 00:13:50.745 "base_bdevs_list": [ 00:13:50.745 { 00:13:50.745 "name": null, 00:13:50.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.745 "is_configured": false, 00:13:50.745 "data_offset": 0, 00:13:50.745 "data_size": 65536 00:13:50.745 }, 00:13:50.745 { 00:13:50.745 "name": "BaseBdev2", 00:13:50.745 "uuid": "f5eb255e-7a16-50aa-96d7-cf0064b0f407", 00:13:50.745 "is_configured": true, 00:13:50.745 "data_offset": 0, 00:13:50.745 "data_size": 65536 00:13:50.745 } 00:13:50.745 ] 00:13:50.745 }' 00:13:50.745 11:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.745 11:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.328 11:23:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:51.328 11:23:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.328 11:23:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:51.328 11:23:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:51.328 11:23:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.328 11:23:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.328 11:23:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.328 11:23:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.328 11:23:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.328 11:23:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.328 11:23:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.328 "name": "raid_bdev1", 00:13:51.328 "uuid": 
"1ebe833d-46eb-444d-966e-5b81b598f7a0", 00:13:51.328 "strip_size_kb": 0, 00:13:51.328 "state": "online", 00:13:51.328 "raid_level": "raid1", 00:13:51.328 "superblock": false, 00:13:51.328 "num_base_bdevs": 2, 00:13:51.328 "num_base_bdevs_discovered": 1, 00:13:51.328 "num_base_bdevs_operational": 1, 00:13:51.328 "base_bdevs_list": [ 00:13:51.328 { 00:13:51.328 "name": null, 00:13:51.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.328 "is_configured": false, 00:13:51.328 "data_offset": 0, 00:13:51.328 "data_size": 65536 00:13:51.328 }, 00:13:51.328 { 00:13:51.328 "name": "BaseBdev2", 00:13:51.328 "uuid": "f5eb255e-7a16-50aa-96d7-cf0064b0f407", 00:13:51.328 "is_configured": true, 00:13:51.328 "data_offset": 0, 00:13:51.328 "data_size": 65536 00:13:51.328 } 00:13:51.328 ] 00:13:51.328 }' 00:13:51.328 11:23:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.328 11:23:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:51.328 11:23:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.328 11:23:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:51.328 11:23:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:51.328 11:23:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.328 11:23:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.328 [2024-11-20 11:23:34.301986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:51.328 [2024-11-20 11:23:34.321002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:13:51.328 11:23:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.328 11:23:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:13:51.328 [2024-11-20 11:23:34.323165] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:52.268 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.268 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.268 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.268 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.268 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.269 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.269 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.269 11:23:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.269 11:23:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.269 11:23:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.529 "name": "raid_bdev1", 00:13:52.529 "uuid": "1ebe833d-46eb-444d-966e-5b81b598f7a0", 00:13:52.529 "strip_size_kb": 0, 00:13:52.529 "state": "online", 00:13:52.529 "raid_level": "raid1", 00:13:52.529 "superblock": false, 00:13:52.529 "num_base_bdevs": 2, 00:13:52.529 "num_base_bdevs_discovered": 2, 00:13:52.529 "num_base_bdevs_operational": 2, 00:13:52.529 "process": { 00:13:52.529 "type": "rebuild", 00:13:52.529 "target": "spare", 00:13:52.529 "progress": { 00:13:52.529 "blocks": 20480, 00:13:52.529 "percent": 31 00:13:52.529 } 00:13:52.529 }, 00:13:52.529 "base_bdevs_list": [ 00:13:52.529 { 00:13:52.529 "name": "spare", 00:13:52.529 "uuid": 
"4cf3da9e-2cc0-5e19-9139-0d671195501b", 00:13:52.529 "is_configured": true, 00:13:52.529 "data_offset": 0, 00:13:52.529 "data_size": 65536 00:13:52.529 }, 00:13:52.529 { 00:13:52.529 "name": "BaseBdev2", 00:13:52.529 "uuid": "f5eb255e-7a16-50aa-96d7-cf0064b0f407", 00:13:52.529 "is_configured": true, 00:13:52.529 "data_offset": 0, 00:13:52.529 "data_size": 65536 00:13:52.529 } 00:13:52.529 ] 00:13:52.529 }' 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=381 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.529 "name": "raid_bdev1", 00:13:52.529 "uuid": "1ebe833d-46eb-444d-966e-5b81b598f7a0", 00:13:52.529 "strip_size_kb": 0, 00:13:52.529 "state": "online", 00:13:52.529 "raid_level": "raid1", 00:13:52.529 "superblock": false, 00:13:52.529 "num_base_bdevs": 2, 00:13:52.529 "num_base_bdevs_discovered": 2, 00:13:52.529 "num_base_bdevs_operational": 2, 00:13:52.529 "process": { 00:13:52.529 "type": "rebuild", 00:13:52.529 "target": "spare", 00:13:52.529 "progress": { 00:13:52.529 "blocks": 22528, 00:13:52.529 "percent": 34 00:13:52.529 } 00:13:52.529 }, 00:13:52.529 "base_bdevs_list": [ 00:13:52.529 { 00:13:52.529 "name": "spare", 00:13:52.529 "uuid": "4cf3da9e-2cc0-5e19-9139-0d671195501b", 00:13:52.529 "is_configured": true, 00:13:52.529 "data_offset": 0, 00:13:52.529 "data_size": 65536 00:13:52.529 }, 00:13:52.529 { 00:13:52.529 "name": "BaseBdev2", 00:13:52.529 "uuid": "f5eb255e-7a16-50aa-96d7-cf0064b0f407", 00:13:52.529 "is_configured": true, 00:13:52.529 "data_offset": 0, 00:13:52.529 "data_size": 65536 00:13:52.529 } 00:13:52.529 ] 00:13:52.529 }' 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.529 11:23:35 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.789 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.789 11:23:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:53.733 11:23:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:53.733 11:23:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.733 11:23:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.733 11:23:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.733 11:23:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.733 11:23:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.733 11:23:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.733 11:23:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.733 11:23:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.733 11:23:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.733 11:23:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.733 11:23:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.733 "name": "raid_bdev1", 00:13:53.733 "uuid": "1ebe833d-46eb-444d-966e-5b81b598f7a0", 00:13:53.733 "strip_size_kb": 0, 00:13:53.733 "state": "online", 00:13:53.733 "raid_level": "raid1", 00:13:53.733 "superblock": false, 00:13:53.733 "num_base_bdevs": 2, 00:13:53.733 "num_base_bdevs_discovered": 2, 00:13:53.733 "num_base_bdevs_operational": 2, 00:13:53.733 "process": { 00:13:53.733 "type": "rebuild", 00:13:53.733 "target": "spare", 
00:13:53.733 "progress": { 00:13:53.733 "blocks": 47104, 00:13:53.733 "percent": 71 00:13:53.733 } 00:13:53.733 }, 00:13:53.733 "base_bdevs_list": [ 00:13:53.733 { 00:13:53.733 "name": "spare", 00:13:53.733 "uuid": "4cf3da9e-2cc0-5e19-9139-0d671195501b", 00:13:53.733 "is_configured": true, 00:13:53.733 "data_offset": 0, 00:13:53.733 "data_size": 65536 00:13:53.733 }, 00:13:53.733 { 00:13:53.733 "name": "BaseBdev2", 00:13:53.733 "uuid": "f5eb255e-7a16-50aa-96d7-cf0064b0f407", 00:13:53.733 "is_configured": true, 00:13:53.733 "data_offset": 0, 00:13:53.733 "data_size": 65536 00:13:53.733 } 00:13:53.733 ] 00:13:53.733 }' 00:13:53.733 11:23:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.733 11:23:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.733 11:23:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.733 11:23:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.733 11:23:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:54.697 [2024-11-20 11:23:37.538961] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:54.698 [2024-11-20 11:23:37.539146] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:54.698 [2024-11-20 11:23:37.539199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.698 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:54.698 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.698 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.698 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:13:54.698 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.698 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.698 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.698 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.698 11:23:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.698 11:23:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.970 11:23:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.970 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.970 "name": "raid_bdev1", 00:13:54.970 "uuid": "1ebe833d-46eb-444d-966e-5b81b598f7a0", 00:13:54.970 "strip_size_kb": 0, 00:13:54.970 "state": "online", 00:13:54.970 "raid_level": "raid1", 00:13:54.970 "superblock": false, 00:13:54.970 "num_base_bdevs": 2, 00:13:54.970 "num_base_bdevs_discovered": 2, 00:13:54.970 "num_base_bdevs_operational": 2, 00:13:54.970 "base_bdevs_list": [ 00:13:54.970 { 00:13:54.970 "name": "spare", 00:13:54.970 "uuid": "4cf3da9e-2cc0-5e19-9139-0d671195501b", 00:13:54.970 "is_configured": true, 00:13:54.970 "data_offset": 0, 00:13:54.970 "data_size": 65536 00:13:54.970 }, 00:13:54.970 { 00:13:54.970 "name": "BaseBdev2", 00:13:54.970 "uuid": "f5eb255e-7a16-50aa-96d7-cf0064b0f407", 00:13:54.970 "is_configured": true, 00:13:54.970 "data_offset": 0, 00:13:54.970 "data_size": 65536 00:13:54.970 } 00:13:54.970 ] 00:13:54.970 }' 00:13:54.970 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.970 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:54.970 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:13:54.970 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:54.970 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:54.970 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:54.970 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.970 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:54.970 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:54.970 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.970 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.970 11:23:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.970 11:23:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.970 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.970 11:23:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.970 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.970 "name": "raid_bdev1", 00:13:54.970 "uuid": "1ebe833d-46eb-444d-966e-5b81b598f7a0", 00:13:54.970 "strip_size_kb": 0, 00:13:54.970 "state": "online", 00:13:54.970 "raid_level": "raid1", 00:13:54.970 "superblock": false, 00:13:54.970 "num_base_bdevs": 2, 00:13:54.970 "num_base_bdevs_discovered": 2, 00:13:54.970 "num_base_bdevs_operational": 2, 00:13:54.970 "base_bdevs_list": [ 00:13:54.970 { 00:13:54.970 "name": "spare", 00:13:54.970 "uuid": "4cf3da9e-2cc0-5e19-9139-0d671195501b", 00:13:54.970 "is_configured": true, 00:13:54.970 "data_offset": 0, 00:13:54.970 "data_size": 65536 
00:13:54.970 }, 00:13:54.970 { 00:13:54.970 "name": "BaseBdev2", 00:13:54.970 "uuid": "f5eb255e-7a16-50aa-96d7-cf0064b0f407", 00:13:54.970 "is_configured": true, 00:13:54.970 "data_offset": 0, 00:13:54.970 "data_size": 65536 00:13:54.970 } 00:13:54.970 ] 00:13:54.970 }' 00:13:54.970 11:23:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.970 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:54.970 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.970 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:54.971 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:54.971 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.971 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.971 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.971 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.971 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:54.971 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.971 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.971 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.971 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.971 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.971 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:55.244 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.244 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.244 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.244 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.244 "name": "raid_bdev1", 00:13:55.244 "uuid": "1ebe833d-46eb-444d-966e-5b81b598f7a0", 00:13:55.244 "strip_size_kb": 0, 00:13:55.244 "state": "online", 00:13:55.244 "raid_level": "raid1", 00:13:55.244 "superblock": false, 00:13:55.244 "num_base_bdevs": 2, 00:13:55.244 "num_base_bdevs_discovered": 2, 00:13:55.244 "num_base_bdevs_operational": 2, 00:13:55.244 "base_bdevs_list": [ 00:13:55.244 { 00:13:55.244 "name": "spare", 00:13:55.244 "uuid": "4cf3da9e-2cc0-5e19-9139-0d671195501b", 00:13:55.244 "is_configured": true, 00:13:55.245 "data_offset": 0, 00:13:55.245 "data_size": 65536 00:13:55.245 }, 00:13:55.245 { 00:13:55.245 "name": "BaseBdev2", 00:13:55.245 "uuid": "f5eb255e-7a16-50aa-96d7-cf0064b0f407", 00:13:55.245 "is_configured": true, 00:13:55.245 "data_offset": 0, 00:13:55.245 "data_size": 65536 00:13:55.245 } 00:13:55.245 ] 00:13:55.245 }' 00:13:55.245 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.245 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.504 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:55.504 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.504 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.504 [2024-11-20 11:23:38.529907] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:55.504 [2024-11-20 11:23:38.529993] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:13:55.504 [2024-11-20 11:23:38.530107] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:55.504 [2024-11-20 11:23:38.530205] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:55.504 [2024-11-20 11:23:38.530251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:55.505 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.505 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.505 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.505 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.505 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:55.505 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.505 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:55.505 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:55.505 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:55.505 11:23:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:55.505 11:23:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:55.505 11:23:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:55.505 11:23:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:55.505 11:23:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:55.505 11:23:38 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:55.505 11:23:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:55.505 11:23:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:55.505 11:23:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:55.505 11:23:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:55.764 /dev/nbd0 00:13:55.764 11:23:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:55.764 11:23:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:55.764 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:55.764 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:55.764 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:55.764 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:55.764 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:55.764 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:55.764 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:55.764 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:55.764 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.764 1+0 records in 00:13:55.764 1+0 records out 00:13:55.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000492194 s, 8.3 MB/s 00:13:55.764 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.764 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:55.764 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.764 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:55.764 11:23:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:55.764 11:23:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:55.764 11:23:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:55.764 11:23:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:56.023 /dev/nbd1 00:13:56.281 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:56.281 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:56.281 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:56.281 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:56.281 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:56.281 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:56.282 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:56.282 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:56.282 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:56.282 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:56.282 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.282 1+0 records in 00:13:56.282 1+0 records out 00:13:56.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352667 s, 11.6 MB/s 00:13:56.282 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.282 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:56.282 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.282 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:56.282 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:56.282 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:56.282 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:56.282 11:23:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:56.282 11:23:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:56.282 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:56.282 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:56.282 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:56.282 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:56.282 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:56.282 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:56.539 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:13:56.539 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:56.539 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:56.539 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.539 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.539 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:56.539 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:56.539 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.539 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:56.539 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:56.797 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:56.797 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:56.797 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:56.797 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.797 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.797 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:56.797 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:56.797 11:23:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.797 11:23:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:56.797 11:23:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75458 00:13:56.797 11:23:39 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@954 -- # '[' -z 75458 ']' 00:13:56.797 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75458 00:13:56.797 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:56.797 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.797 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75458 00:13:57.055 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:57.055 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:57.055 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75458' 00:13:57.055 killing process with pid 75458 00:13:57.055 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75458 00:13:57.055 Received shutdown signal, test time was about 60.000000 seconds 00:13:57.055 00:13:57.055 Latency(us) 00:13:57.055 [2024-11-20T11:23:40.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.055 [2024-11-20T11:23:40.171Z] =================================================================================================================== 00:13:57.055 [2024-11-20T11:23:40.171Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:57.055 11:23:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75458 00:13:57.055 [2024-11-20 11:23:39.947090] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:57.314 [2024-11-20 11:23:40.272510] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:58.692 00:13:58.692 real 0m16.018s 00:13:58.692 user 0m18.402s 00:13:58.692 sys 0m3.215s 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.692 ************************************ 00:13:58.692 END TEST raid_rebuild_test 00:13:58.692 ************************************ 00:13:58.692 11:23:41 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:13:58.692 11:23:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:58.692 11:23:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.692 11:23:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:58.692 ************************************ 00:13:58.692 START TEST raid_rebuild_test_sb 00:13:58.692 ************************************ 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.692 11:23:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:58.692 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75883 00:13:58.693 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:58.693 11:23:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75883 00:13:58.693 11:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75883 ']' 00:13:58.693 11:23:41 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.693 11:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.693 11:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.693 11:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.693 11:23:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.693 [2024-11-20 11:23:41.585980] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:13:58.693 [2024-11-20 11:23:41.586199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:58.693 Zero copy mechanism will not be used. 
00:13:58.693 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75883 ] 00:13:58.693 [2024-11-20 11:23:41.762292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.952 [2024-11-20 11:23:41.881289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.213 [2024-11-20 11:23:42.084433] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.213 [2024-11-20 11:23:42.084609] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.472 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.472 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:59.472 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.472 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:59.472 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.472 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.472 BaseBdev1_malloc 00:13:59.472 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.472 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:59.472 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.472 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.472 [2024-11-20 11:23:42.496497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:59.473 [2024-11-20 11:23:42.496578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:59.473 [2024-11-20 11:23:42.496604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:59.473 [2024-11-20 11:23:42.496617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.473 [2024-11-20 11:23:42.498810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.473 [2024-11-20 11:23:42.498851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:59.473 BaseBdev1 00:13:59.473 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.473 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.473 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:59.473 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.473 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.473 BaseBdev2_malloc 00:13:59.473 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.473 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:59.473 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.473 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.473 [2024-11-20 11:23:42.551151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:59.473 [2024-11-20 11:23:42.551209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.473 [2024-11-20 11:23:42.551228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:59.473 [2024-11-20 11:23:42.551240] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.473 [2024-11-20 11:23:42.553275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.473 [2024-11-20 11:23:42.553317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:59.473 BaseBdev2 00:13:59.473 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.473 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:59.473 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.473 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.733 spare_malloc 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.733 spare_delay 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.733 [2024-11-20 11:23:42.629121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:59.733 [2024-11-20 11:23:42.629184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:13:59.733 [2024-11-20 11:23:42.629204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:59.733 [2024-11-20 11:23:42.629216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.733 [2024-11-20 11:23:42.631370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.733 [2024-11-20 11:23:42.631504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:59.733 spare 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.733 [2024-11-20 11:23:42.637159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.733 [2024-11-20 11:23:42.638854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:59.733 [2024-11-20 11:23:42.639008] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:59.733 [2024-11-20 11:23:42.639025] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:59.733 [2024-11-20 11:23:42.639248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:59.733 [2024-11-20 11:23:42.639396] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:59.733 [2024-11-20 11:23:42.639404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:59.733 [2024-11-20 11:23:42.639611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.733 "name": "raid_bdev1", 00:13:59.733 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc", 00:13:59.733 
"strip_size_kb": 0, 00:13:59.733 "state": "online", 00:13:59.733 "raid_level": "raid1", 00:13:59.733 "superblock": true, 00:13:59.733 "num_base_bdevs": 2, 00:13:59.733 "num_base_bdevs_discovered": 2, 00:13:59.733 "num_base_bdevs_operational": 2, 00:13:59.733 "base_bdevs_list": [ 00:13:59.733 { 00:13:59.733 "name": "BaseBdev1", 00:13:59.733 "uuid": "8c5253ed-207d-5958-b62d-844ccae64920", 00:13:59.733 "is_configured": true, 00:13:59.733 "data_offset": 2048, 00:13:59.733 "data_size": 63488 00:13:59.733 }, 00:13:59.733 { 00:13:59.733 "name": "BaseBdev2", 00:13:59.733 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0", 00:13:59.733 "is_configured": true, 00:13:59.733 "data_offset": 2048, 00:13:59.733 "data_size": 63488 00:13:59.733 } 00:13:59.733 ] 00:13:59.733 }' 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.733 11:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.303 [2024-11-20 11:23:43.120647] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:00.303 11:23:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.303 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:00.303 [2024-11-20 11:23:43.403924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:00.562 /dev/nbd0 00:14:00.562 
11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:00.562 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:00.562 11:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:00.562 11:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:00.562 11:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:00.562 11:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:00.562 11:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:00.562 11:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:00.562 11:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:00.562 11:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:00.562 11:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.562 1+0 records in 00:14:00.562 1+0 records out 00:14:00.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241767 s, 16.9 MB/s 00:14:00.562 11:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.562 11:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:00.562 11:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.562 11:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:00.562 11:23:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:00.562 11:23:43 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.562 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.562 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:00.562 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:00.562 11:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:04.755 63488+0 records in 00:14:04.755 63488+0 records out 00:14:04.755 32505856 bytes (33 MB, 31 MiB) copied, 4.26839 s, 7.6 MB/s 00:14:04.755 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:04.755 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:04.755 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:04.755 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:04.755 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:04.756 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.756 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:05.015 [2024-11-20 11:23:47.934333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.016 [2024-11-20 11:23:47.966375] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.016 11:23:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.016 11:23:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.016 "name": "raid_bdev1", 00:14:05.016 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc", 00:14:05.016 "strip_size_kb": 0, 00:14:05.016 "state": "online", 00:14:05.016 "raid_level": "raid1", 00:14:05.016 "superblock": true, 00:14:05.016 "num_base_bdevs": 2, 00:14:05.016 "num_base_bdevs_discovered": 1, 00:14:05.016 "num_base_bdevs_operational": 1, 00:14:05.016 "base_bdevs_list": [ 00:14:05.016 { 00:14:05.016 "name": null, 00:14:05.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.016 "is_configured": false, 00:14:05.016 "data_offset": 0, 00:14:05.016 "data_size": 63488 00:14:05.016 }, 00:14:05.016 { 00:14:05.016 "name": "BaseBdev2", 00:14:05.016 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0", 00:14:05.016 "is_configured": true, 00:14:05.016 "data_offset": 2048, 00:14:05.016 "data_size": 63488 00:14:05.016 } 00:14:05.016 ] 00:14:05.016 }' 00:14:05.016 11:23:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.016 11:23:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.585 11:23:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:05.585 11:23:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:05.585 11:23:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.585 [2024-11-20 11:23:48.425622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:05.585 [2024-11-20 11:23:48.444119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:14:05.585 11:23:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.585 [2024-11-20 11:23:48.446057] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:05.585 11:23:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:06.524 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.524 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.524 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.524 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.524 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.524 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.524 11:23:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.524 11:23:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.524 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.524 11:23:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.524 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.524 "name": "raid_bdev1", 00:14:06.524 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc", 
00:14:06.524 "strip_size_kb": 0, 00:14:06.524 "state": "online", 00:14:06.524 "raid_level": "raid1", 00:14:06.524 "superblock": true, 00:14:06.524 "num_base_bdevs": 2, 00:14:06.524 "num_base_bdevs_discovered": 2, 00:14:06.524 "num_base_bdevs_operational": 2, 00:14:06.524 "process": { 00:14:06.524 "type": "rebuild", 00:14:06.524 "target": "spare", 00:14:06.524 "progress": { 00:14:06.524 "blocks": 20480, 00:14:06.524 "percent": 32 00:14:06.524 } 00:14:06.524 }, 00:14:06.524 "base_bdevs_list": [ 00:14:06.524 { 00:14:06.524 "name": "spare", 00:14:06.524 "uuid": "48844476-60a9-581b-8d64-f305d0cd878f", 00:14:06.524 "is_configured": true, 00:14:06.524 "data_offset": 2048, 00:14:06.524 "data_size": 63488 00:14:06.524 }, 00:14:06.524 { 00:14:06.524 "name": "BaseBdev2", 00:14:06.524 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0", 00:14:06.524 "is_configured": true, 00:14:06.524 "data_offset": 2048, 00:14:06.524 "data_size": 63488 00:14:06.524 } 00:14:06.524 ] 00:14:06.524 }' 00:14:06.524 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.524 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.524 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.524 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.524 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:06.524 11:23:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.524 11:23:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.524 [2024-11-20 11:23:49.589597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.859 [2024-11-20 11:23:49.652048] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:14:06.859 [2024-11-20 11:23:49.652199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.859 [2024-11-20 11:23:49.652218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.859 [2024-11-20 11:23:49.652232] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:06.859 11:23:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.859 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:06.859 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.859 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.859 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.859 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.859 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:06.859 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.859 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.859 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.859 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.859 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.859 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.859 11:23:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.859 11:23:49 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.859 11:23:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.859 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.859 "name": "raid_bdev1", 00:14:06.859 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc", 00:14:06.859 "strip_size_kb": 0, 00:14:06.859 "state": "online", 00:14:06.859 "raid_level": "raid1", 00:14:06.859 "superblock": true, 00:14:06.859 "num_base_bdevs": 2, 00:14:06.859 "num_base_bdevs_discovered": 1, 00:14:06.859 "num_base_bdevs_operational": 1, 00:14:06.859 "base_bdevs_list": [ 00:14:06.859 { 00:14:06.859 "name": null, 00:14:06.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.859 "is_configured": false, 00:14:06.859 "data_offset": 0, 00:14:06.859 "data_size": 63488 00:14:06.859 }, 00:14:06.859 { 00:14:06.859 "name": "BaseBdev2", 00:14:06.859 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0", 00:14:06.859 "is_configured": true, 00:14:06.859 "data_offset": 2048, 00:14:06.859 "data_size": 63488 00:14:06.859 } 00:14:06.859 ] 00:14:06.859 }' 00:14:06.859 11:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.859 11:23:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.119 11:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:07.119 11:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.119 11:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:07.119 11:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:07.119 11:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.119 11:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:07.119 11:23:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.119 11:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.119 11:23:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.119 11:23:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.119 11:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.119 "name": "raid_bdev1", 00:14:07.119 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc", 00:14:07.119 "strip_size_kb": 0, 00:14:07.119 "state": "online", 00:14:07.119 "raid_level": "raid1", 00:14:07.119 "superblock": true, 00:14:07.119 "num_base_bdevs": 2, 00:14:07.119 "num_base_bdevs_discovered": 1, 00:14:07.119 "num_base_bdevs_operational": 1, 00:14:07.119 "base_bdevs_list": [ 00:14:07.119 { 00:14:07.119 "name": null, 00:14:07.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.119 "is_configured": false, 00:14:07.119 "data_offset": 0, 00:14:07.119 "data_size": 63488 00:14:07.119 }, 00:14:07.119 { 00:14:07.119 "name": "BaseBdev2", 00:14:07.119 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0", 00:14:07.119 "is_configured": true, 00:14:07.119 "data_offset": 2048, 00:14:07.119 "data_size": 63488 00:14:07.119 } 00:14:07.119 ] 00:14:07.119 }' 00:14:07.119 11:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.377 11:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:07.377 11:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.377 11:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:07.377 11:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 
spare 00:14:07.377 11:23:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.377 11:23:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.377 [2024-11-20 11:23:50.300695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:07.377 [2024-11-20 11:23:50.317817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:14:07.377 11:23:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.377 11:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:07.377 [2024-11-20 11:23:50.319805] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:08.312 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.312 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.312 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.312 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.312 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.312 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.312 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.312 11:23:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.312 11:23:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.312 11:23:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.312 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:08.312 "name": "raid_bdev1", 00:14:08.312 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc", 00:14:08.312 "strip_size_kb": 0, 00:14:08.312 "state": "online", 00:14:08.312 "raid_level": "raid1", 00:14:08.312 "superblock": true, 00:14:08.312 "num_base_bdevs": 2, 00:14:08.312 "num_base_bdevs_discovered": 2, 00:14:08.312 "num_base_bdevs_operational": 2, 00:14:08.312 "process": { 00:14:08.312 "type": "rebuild", 00:14:08.312 "target": "spare", 00:14:08.312 "progress": { 00:14:08.312 "blocks": 20480, 00:14:08.312 "percent": 32 00:14:08.312 } 00:14:08.312 }, 00:14:08.312 "base_bdevs_list": [ 00:14:08.312 { 00:14:08.312 "name": "spare", 00:14:08.312 "uuid": "48844476-60a9-581b-8d64-f305d0cd878f", 00:14:08.312 "is_configured": true, 00:14:08.312 "data_offset": 2048, 00:14:08.312 "data_size": 63488 00:14:08.312 }, 00:14:08.312 { 00:14:08.312 "name": "BaseBdev2", 00:14:08.312 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0", 00:14:08.312 "is_configured": true, 00:14:08.312 "data_offset": 2048, 00:14:08.312 "data_size": 63488 00:14:08.312 } 00:14:08.312 ] 00:14:08.312 }' 00:14:08.312 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.312 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:08.571 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:08.571 11:23:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=397 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.571 "name": "raid_bdev1", 00:14:08.571 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc", 00:14:08.571 "strip_size_kb": 0, 00:14:08.571 "state": "online", 00:14:08.571 "raid_level": "raid1", 00:14:08.571 "superblock": true, 00:14:08.571 "num_base_bdevs": 2, 00:14:08.571 "num_base_bdevs_discovered": 2, 00:14:08.571 "num_base_bdevs_operational": 2, 00:14:08.571 "process": { 00:14:08.571 
"type": "rebuild", 00:14:08.571 "target": "spare", 00:14:08.571 "progress": { 00:14:08.571 "blocks": 22528, 00:14:08.571 "percent": 35 00:14:08.571 } 00:14:08.571 }, 00:14:08.571 "base_bdevs_list": [ 00:14:08.571 { 00:14:08.571 "name": "spare", 00:14:08.571 "uuid": "48844476-60a9-581b-8d64-f305d0cd878f", 00:14:08.571 "is_configured": true, 00:14:08.571 "data_offset": 2048, 00:14:08.571 "data_size": 63488 00:14:08.571 }, 00:14:08.571 { 00:14:08.571 "name": "BaseBdev2", 00:14:08.571 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0", 00:14:08.571 "is_configured": true, 00:14:08.571 "data_offset": 2048, 00:14:08.571 "data_size": 63488 00:14:08.571 } 00:14:08.571 ] 00:14:08.571 }' 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.571 11:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:09.508 11:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:09.508 11:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.508 11:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.508 11:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.508 11:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.508 11:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.508 11:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:09.508 11:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.508 11:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.508 11:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.508 11:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.768 11:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.768 "name": "raid_bdev1", 00:14:09.768 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc", 00:14:09.769 "strip_size_kb": 0, 00:14:09.769 "state": "online", 00:14:09.769 "raid_level": "raid1", 00:14:09.769 "superblock": true, 00:14:09.769 "num_base_bdevs": 2, 00:14:09.769 "num_base_bdevs_discovered": 2, 00:14:09.769 "num_base_bdevs_operational": 2, 00:14:09.769 "process": { 00:14:09.769 "type": "rebuild", 00:14:09.769 "target": "spare", 00:14:09.769 "progress": { 00:14:09.769 "blocks": 45056, 00:14:09.769 "percent": 70 00:14:09.769 } 00:14:09.769 }, 00:14:09.769 "base_bdevs_list": [ 00:14:09.769 { 00:14:09.769 "name": "spare", 00:14:09.769 "uuid": "48844476-60a9-581b-8d64-f305d0cd878f", 00:14:09.769 "is_configured": true, 00:14:09.769 "data_offset": 2048, 00:14:09.769 "data_size": 63488 00:14:09.769 }, 00:14:09.769 { 00:14:09.769 "name": "BaseBdev2", 00:14:09.769 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0", 00:14:09.769 "is_configured": true, 00:14:09.769 "data_offset": 2048, 00:14:09.769 "data_size": 63488 00:14:09.769 } 00:14:09.769 ] 00:14:09.769 }' 00:14:09.769 11:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.769 11:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.769 11:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.769 
11:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.769 11:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:10.366 [2024-11-20 11:23:53.434726] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:10.366 [2024-11-20 11:23:53.434847] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:10.366 [2024-11-20 11:23:53.435052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.937 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:10.937 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.937 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.937 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.937 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.937 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.937 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.937 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.937 11:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.937 11:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.937 11:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.937 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.937 "name": "raid_bdev1", 00:14:10.937 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc", 00:14:10.937 
"strip_size_kb": 0, 00:14:10.937 "state": "online", 00:14:10.937 "raid_level": "raid1", 00:14:10.937 "superblock": true, 00:14:10.937 "num_base_bdevs": 2, 00:14:10.937 "num_base_bdevs_discovered": 2, 00:14:10.937 "num_base_bdevs_operational": 2, 00:14:10.937 "base_bdevs_list": [ 00:14:10.937 { 00:14:10.937 "name": "spare", 00:14:10.937 "uuid": "48844476-60a9-581b-8d64-f305d0cd878f", 00:14:10.937 "is_configured": true, 00:14:10.937 "data_offset": 2048, 00:14:10.937 "data_size": 63488 00:14:10.937 }, 00:14:10.937 { 00:14:10.937 "name": "BaseBdev2", 00:14:10.937 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0", 00:14:10.937 "is_configured": true, 00:14:10.937 "data_offset": 2048, 00:14:10.937 "data_size": 63488 00:14:10.937 } 00:14:10.937 ] 00:14:10.937 }' 00:14:10.937 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.938 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:10.938 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.938 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:10.938 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:10.938 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:10.938 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.938 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:10.938 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.938 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.938 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.938 
11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.938 11:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.938 11:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.938 11:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.938 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.938 "name": "raid_bdev1", 00:14:10.938 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc", 00:14:10.938 "strip_size_kb": 0, 00:14:10.938 "state": "online", 00:14:10.938 "raid_level": "raid1", 00:14:10.938 "superblock": true, 00:14:10.938 "num_base_bdevs": 2, 00:14:10.938 "num_base_bdevs_discovered": 2, 00:14:10.938 "num_base_bdevs_operational": 2, 00:14:10.938 "base_bdevs_list": [ 00:14:10.938 { 00:14:10.938 "name": "spare", 00:14:10.938 "uuid": "48844476-60a9-581b-8d64-f305d0cd878f", 00:14:10.938 "is_configured": true, 00:14:10.938 "data_offset": 2048, 00:14:10.938 "data_size": 63488 00:14:10.938 }, 00:14:10.938 { 00:14:10.938 "name": "BaseBdev2", 00:14:10.938 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0", 00:14:10.938 "is_configured": true, 00:14:10.938 "data_offset": 2048, 00:14:10.938 "data_size": 63488 00:14:10.938 } 00:14:10.938 ] 00:14:10.938 }' 00:14:10.938 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.938 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.938 11:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.938 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.938 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:10.938 11:23:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.938 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.938 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.938 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.938 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:10.938 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.938 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.938 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.938 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.938 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.938 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.938 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.938 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.938 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.199 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.199 "name": "raid_bdev1", 00:14:11.199 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc", 00:14:11.199 "strip_size_kb": 0, 00:14:11.199 "state": "online", 00:14:11.199 "raid_level": "raid1", 00:14:11.199 "superblock": true, 00:14:11.199 "num_base_bdevs": 2, 00:14:11.199 "num_base_bdevs_discovered": 2, 00:14:11.199 "num_base_bdevs_operational": 2, 00:14:11.199 "base_bdevs_list": [ 00:14:11.199 { 
00:14:11.199 "name": "spare", 00:14:11.199 "uuid": "48844476-60a9-581b-8d64-f305d0cd878f", 00:14:11.199 "is_configured": true, 00:14:11.199 "data_offset": 2048, 00:14:11.199 "data_size": 63488 00:14:11.199 }, 00:14:11.199 { 00:14:11.199 "name": "BaseBdev2", 00:14:11.199 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0", 00:14:11.199 "is_configured": true, 00:14:11.199 "data_offset": 2048, 00:14:11.199 "data_size": 63488 00:14:11.199 } 00:14:11.199 ] 00:14:11.199 }' 00:14:11.199 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.199 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.457 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:11.457 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.457 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.457 [2024-11-20 11:23:54.458539] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:11.457 [2024-11-20 11:23:54.458582] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:11.457 [2024-11-20 11:23:54.458693] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.457 [2024-11-20 11:23:54.458772] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.458 [2024-11-20 11:23:54.458784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:11.458 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.458 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.458 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.458 
11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.458 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:11.458 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.458 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:11.458 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:11.458 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:11.458 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:11.458 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:11.458 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:11.458 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:11.458 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:11.458 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:11.458 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:11.458 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:11.458 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:11.458 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:11.718 /dev/nbd0 00:14:11.718 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:11.718 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:11.718 
11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:11.718 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:11.718 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:11.718 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:11.718 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:11.718 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:11.718 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:11.718 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:11.718 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:11.718 1+0 records in 00:14:11.718 1+0 records out 00:14:11.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203726 s, 20.1 MB/s 00:14:11.718 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.718 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:11.718 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.718 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:11.718 11:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:11.718 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:11.718 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:11.718 11:23:54 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:14:11.978 /dev/nbd1
00:14:11.978 11:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:14:11.978 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:14:11.978 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:14:11.978 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i
00:14:11.978 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:14:11.978 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:14:11.978 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:14:11.978 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break
00:14:11.978 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:14:11.978 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:14:11.978 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:11.978 1+0 records in
00:14:11.978 1+0 records out
00:14:11.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409829 s, 10.0 MB/s
00:14:11.978 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:11.978 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096
00:14:11.978 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:11.978 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:14:11.978 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0
00:14:11.978 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:11.978 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:14:11.978 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:14:12.237 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:14:12.237 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:14:12.237 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:14:12.237 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:12.237 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:14:12.237 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:12.237 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:14:12.497 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:14:12.497 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:14:12.497 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:14:12.497 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:12.497 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:12.497 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:14:12.497 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:14:12.497 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:14:12.497 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:12.497 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:12.804 [2024-11-20 11:23:55.676994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:14:12.804 [2024-11-20 11:23:55.677055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:12.804 [2024-11-20 11:23:55.677080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:14:12.804 [2024-11-20 11:23:55.677090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:12.804 [2024-11-20 11:23:55.679295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:12.804 [2024-11-20 11:23:55.679333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:14:12.804 [2024-11-20 11:23:55.679429] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:14:12.804 [2024-11-20 11:23:55.679505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:12.804 [2024-11-20 11:23:55.679691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:12.804 spare
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:12.804 [2024-11-20 11:23:55.779644] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:14:12.804 [2024-11-20 11:23:55.779717] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:12.804 [2024-11-20 11:23:55.780067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0
00:14:12.804 [2024-11-20 11:23:55.780298] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:14:12.804 [2024-11-20 11:23:55.780314] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:14:12.804 [2024-11-20 11:23:55.780552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:12.804 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:12.804 "name": "raid_bdev1",
00:14:12.804 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc",
00:14:12.804 "strip_size_kb": 0,
00:14:12.804 "state": "online",
00:14:12.804 "raid_level": "raid1",
00:14:12.804 "superblock": true,
00:14:12.804 "num_base_bdevs": 2,
00:14:12.804 "num_base_bdevs_discovered": 2,
00:14:12.804 "num_base_bdevs_operational": 2,
00:14:12.804 "base_bdevs_list": [
00:14:12.804 {
00:14:12.804 "name": "spare",
00:14:12.804 "uuid": "48844476-60a9-581b-8d64-f305d0cd878f",
00:14:12.804 "is_configured": true,
00:14:12.804 "data_offset": 2048,
00:14:12.804 "data_size": 63488
00:14:12.804 },
00:14:12.804 {
00:14:12.804 "name": "BaseBdev2",
00:14:12.804 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0",
00:14:12.804 "is_configured": true,
00:14:12.804 "data_offset": 2048,
00:14:12.804 "data_size": 63488
00:14:12.804 }
00:14:12.804 ]
00:14:12.804 }'
00:14:12.805 11:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:12.805 11:23:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:13.372 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:13.372 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:13.372 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:13.372 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:13.373 "name": "raid_bdev1",
00:14:13.373 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc",
00:14:13.373 "strip_size_kb": 0,
00:14:13.373 "state": "online",
00:14:13.373 "raid_level": "raid1",
00:14:13.373 "superblock": true,
00:14:13.373 "num_base_bdevs": 2,
00:14:13.373 "num_base_bdevs_discovered": 2,
00:14:13.373 "num_base_bdevs_operational": 2,
00:14:13.373 "base_bdevs_list": [
00:14:13.373 {
00:14:13.373 "name": "spare",
00:14:13.373 "uuid": "48844476-60a9-581b-8d64-f305d0cd878f",
00:14:13.373 "is_configured": true,
00:14:13.373 "data_offset": 2048,
00:14:13.373 "data_size": 63488
00:14:13.373 },
00:14:13.373 {
00:14:13.373 "name": "BaseBdev2",
00:14:13.373 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0",
00:14:13.373 "is_configured": true,
00:14:13.373 "data_offset": 2048,
00:14:13.373 "data_size": 63488
00:14:13.373 }
00:14:13.373 ]
00:14:13.373 }'
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:13.373 [2024-11-20 11:23:56.351952] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:13.373 "name": "raid_bdev1",
00:14:13.373 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc",
00:14:13.373 "strip_size_kb": 0,
00:14:13.373 "state": "online",
00:14:13.373 "raid_level": "raid1",
00:14:13.373 "superblock": true,
00:14:13.373 "num_base_bdevs": 2,
00:14:13.373 "num_base_bdevs_discovered": 1,
00:14:13.373 "num_base_bdevs_operational": 1,
00:14:13.373 "base_bdevs_list": [
00:14:13.373 {
00:14:13.373 "name": null,
00:14:13.373 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:13.373 "is_configured": false,
00:14:13.373 "data_offset": 0,
00:14:13.373 "data_size": 63488
00:14:13.373 },
00:14:13.373 {
00:14:13.373 "name": "BaseBdev2",
00:14:13.373 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0",
00:14:13.373 "is_configured": true,
00:14:13.373 "data_offset": 2048,
00:14:13.373 "data_size": 63488
00:14:13.373 }
00:14:13.373 ]
00:14:13.373 }'
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:13.373 11:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:13.942 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:13.942 11:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:13.942 11:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:13.942 [2024-11-20 11:23:56.799279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:13.942 [2024-11-20 11:23:56.799530] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:14:13.942 [2024-11-20 11:23:56.799552] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:14:13.942 [2024-11-20 11:23:56.799596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:13.942 [2024-11-20 11:23:56.817279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0
00:14:13.942 11:23:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:13.942 11:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
00:14:13.942 [2024-11-20 11:23:56.819408] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:14.883 11:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:14.883 11:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:14.883 11:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:14.883 11:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:14.883 11:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:14.883 11:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:14.883 11:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:14.883 11:23:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.883 11:23:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:14.883 11:23:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:14.883 11:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:14.883 "name": "raid_bdev1",
00:14:14.883 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc",
00:14:14.883 "strip_size_kb": 0,
00:14:14.883 "state": "online",
00:14:14.883 "raid_level": "raid1",
00:14:14.883 "superblock": true,
00:14:14.883 "num_base_bdevs": 2,
00:14:14.883 "num_base_bdevs_discovered": 2,
00:14:14.883 "num_base_bdevs_operational": 2,
00:14:14.883 "process": {
00:14:14.883 "type": "rebuild",
00:14:14.883 "target": "spare",
00:14:14.883 "progress": {
00:14:14.883 "blocks": 20480,
00:14:14.883 "percent": 32
00:14:14.883 }
00:14:14.883 },
00:14:14.883 "base_bdevs_list": [
00:14:14.883 {
00:14:14.883 "name": "spare",
00:14:14.883 "uuid": "48844476-60a9-581b-8d64-f305d0cd878f",
00:14:14.883 "is_configured": true,
00:14:14.883 "data_offset": 2048,
00:14:14.883 "data_size": 63488
00:14:14.883 },
00:14:14.883 {
00:14:14.883 "name": "BaseBdev2",
00:14:14.883 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0",
00:14:14.883 "is_configured": true,
00:14:14.883 "data_offset": 2048,
00:14:14.883 "data_size": 63488
00:14:14.883 }
00:14:14.883 ]
00:14:14.883 }'
00:14:14.883 11:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:14.883 11:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:14.883 11:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:14.883 11:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:14.883 11:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:14:14.883 11:23:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:14.883 11:23:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:14.883 [2024-11-20 11:23:57.971187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:15.143 [2024-11-20 11:23:58.025563] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:15.143 [2024-11-20 11:23:58.025668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:15.143 [2024-11-20 11:23:58.025684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:15.143 [2024-11-20 11:23:58.025694] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:15.143 11:23:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.143 11:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:15.143 11:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:15.143 11:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:15.143 11:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:15.143 11:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:15.143 11:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:14:15.143 11:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:15.143 11:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:15.143 11:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:15.143 11:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:15.143 11:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:15.143 11:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:15.143 11:23:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.143 11:23:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:15.143 11:23:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.143 11:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:15.144 "name": "raid_bdev1",
00:14:15.144 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc",
00:14:15.144 "strip_size_kb": 0,
00:14:15.144 "state": "online",
00:14:15.144 "raid_level": "raid1",
00:14:15.144 "superblock": true,
00:14:15.144 "num_base_bdevs": 2,
00:14:15.144 "num_base_bdevs_discovered": 1,
00:14:15.144 "num_base_bdevs_operational": 1,
00:14:15.144 "base_bdevs_list": [
00:14:15.144 {
00:14:15.144 "name": null,
00:14:15.144 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:15.144 "is_configured": false,
00:14:15.144 "data_offset": 0,
00:14:15.144 "data_size": 63488
00:14:15.144 },
00:14:15.144 {
00:14:15.144 "name": "BaseBdev2",
00:14:15.144 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0",
00:14:15.144 "is_configured": true,
00:14:15.144 "data_offset": 2048,
00:14:15.144 "data_size": 63488
00:14:15.144 }
00:14:15.144 ]
00:14:15.144 }'
00:14:15.144 11:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:15.144 11:23:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:15.712 11:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:14:15.712 11:23:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.712 11:23:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:15.712 [2024-11-20 11:23:58.522421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:14:15.712 [2024-11-20 11:23:58.522506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:15.712 [2024-11-20 11:23:58.522530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:14:15.712 [2024-11-20 11:23:58.522542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:15.712 [2024-11-20 11:23:58.523063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:15.712 [2024-11-20 11:23:58.523096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:14:15.712 [2024-11-20 11:23:58.523203] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:14:15.712 [2024-11-20 11:23:58.523226] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:14:15.712 [2024-11-20 11:23:58.523237] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:14:15.712 [2024-11-20 11:23:58.523261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:15.712 [2024-11-20 11:23:58.539271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80
00:14:15.712 spare
00:14:15.712 11:23:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.712 11:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
00:14:15.712 [2024-11-20 11:23:58.541600] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:16.652 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:16.652 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:16.652 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:16.652 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:16.652 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:16.652 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:16.652 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:16.652 11:23:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:16.652 11:23:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:16.652 11:23:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:16.652 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:16.652 "name": "raid_bdev1",
00:14:16.652 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc",
00:14:16.652 "strip_size_kb": 0,
00:14:16.652 "state": "online",
00:14:16.652 "raid_level": "raid1",
00:14:16.652 "superblock": true,
00:14:16.652 "num_base_bdevs": 2,
00:14:16.652 "num_base_bdevs_discovered": 2,
00:14:16.652 "num_base_bdevs_operational": 2,
00:14:16.652 "process": {
00:14:16.652 "type": "rebuild",
00:14:16.652 "target": "spare",
00:14:16.652 "progress": {
00:14:16.653 "blocks": 20480,
00:14:16.653 "percent": 32
00:14:16.653 }
00:14:16.653 },
00:14:16.653 "base_bdevs_list": [
00:14:16.653 {
00:14:16.653 "name": "spare",
00:14:16.653 "uuid": "48844476-60a9-581b-8d64-f305d0cd878f",
00:14:16.653 "is_configured": true,
00:14:16.653 "data_offset": 2048,
00:14:16.653 "data_size": 63488
00:14:16.653 },
00:14:16.653 {
00:14:16.653 "name": "BaseBdev2",
00:14:16.653 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0",
00:14:16.653 "is_configured": true,
00:14:16.653 "data_offset": 2048,
00:14:16.653 "data_size": 63488
00:14:16.653 }
00:14:16.653 ]
00:14:16.653 }'
00:14:16.653 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:16.653 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:16.653 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:16.653 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:16.653 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:14:16.653 11:23:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:16.653 11:23:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:16.653 [2024-11-20 11:23:59.685388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:16.653 [2024-11-20 11:23:59.747689] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:16.653 [2024-11-20 11:23:59.747782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:16.653 [2024-11-20 11:23:59.747819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:16.653 [2024-11-20 11:23:59.747828] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:16.912 11:23:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:16.912 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:14:16.912 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:16.912 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:16.912 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:16.912 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:16.912 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:14:16.912 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:16.912 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:16.912 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:16.912 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:16.912 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:16.912 11:23:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:16.912 11:23:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:16.912 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:16.912 11:23:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:16.912 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:16.913 "name": "raid_bdev1",
00:14:16.913 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc",
00:14:16.913 "strip_size_kb": 0,
00:14:16.913 "state": "online",
00:14:16.913 "raid_level": "raid1",
00:14:16.913 "superblock": true,
00:14:16.913 "num_base_bdevs": 2,
00:14:16.913 "num_base_bdevs_discovered": 1,
00:14:16.913 "num_base_bdevs_operational": 1,
00:14:16.913 "base_bdevs_list": [
00:14:16.913 {
00:14:16.913 "name": null,
00:14:16.913 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:16.913 "is_configured": false,
00:14:16.913 "data_offset": 0,
00:14:16.913 "data_size": 63488
00:14:16.913 },
00:14:16.913 {
00:14:16.913 "name": "BaseBdev2",
00:14:16.913 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0",
00:14:16.913 "is_configured": true,
00:14:16.913 "data_offset": 2048,
00:14:16.913 "data_size": 63488
00:14:16.913 }
00:14:16.913 ]
00:14:16.913 }'
00:14:16.913 11:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:16.913 11:23:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:17.173 11:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:17.173 11:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:17.173 11:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:17.173 11:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:17.173 11:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:17.173 11:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:17.173 11:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:17.173 11:24:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:17.173 11:24:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:17.173 11:24:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:17.173 11:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:17.173 "name": "raid_bdev1",
00:14:17.173 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc",
00:14:17.173 "strip_size_kb": 0,
00:14:17.173 "state": "online",
00:14:17.173 "raid_level": "raid1",
00:14:17.173 "superblock": true,
00:14:17.173 "num_base_bdevs": 2,
00:14:17.173 "num_base_bdevs_discovered": 1,
00:14:17.173 "num_base_bdevs_operational": 1,
00:14:17.173 "base_bdevs_list": [
00:14:17.174 {
00:14:17.174 "name": null,
00:14:17.174 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:17.174 "is_configured": false,
00:14:17.174 "data_offset": 0,
00:14:17.174 "data_size": 63488
00:14:17.174 },
00:14:17.174 {
00:14:17.174 "name": "BaseBdev2",
00:14:17.174 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0",
00:14:17.174 "is_configured": true,
00:14:17.174 "data_offset": 2048,
00:14:17.174 "data_size": 63488
00:14:17.174 }
00:14:17.174 ]
00:14:17.174 }'
00:14:17.174 11:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:17.434 11:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:17.434 11:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:17.434 11:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:17.434 11:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:14:17.434 11:24:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:17.434 11:24:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:17.434 11:24:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:17.434 11:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:14:17.434 11:24:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:17.434 11:24:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:17.434 [2024-11-20 11:24:00.385909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:14:17.434 [2024-11-20 11:24:00.385979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:17.434 [2024-11-20 11:24:00.386007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:14:17.434 [2024-11-20 11:24:00.386025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:17.434 [2024-11-20 11:24:00.386507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:17.434 [2024-11-20 11:24:00.386535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:14:17.434 [2024-11-20 11:24:00.386635] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:14:17.434 [2024-11-20 11:24:00.386655] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:14:17.434 [2024-11-20 11:24:00.386666] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:14:17.434 [2024-11-20 11:24:00.386678] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:14:17.434 BaseBdev1
00:14:17.434 11:24:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591
-- # [[ 0 == 0 ]] 00:14:17.434 11:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:18.373 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:18.373 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.373 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.373 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.373 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.373 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:18.373 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.373 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.373 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.373 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.373 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.373 11:24:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.374 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.374 11:24:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.374 11:24:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.374 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.374 "name": "raid_bdev1", 00:14:18.374 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc", 00:14:18.374 "strip_size_kb": 0, 
00:14:18.374 "state": "online", 00:14:18.374 "raid_level": "raid1", 00:14:18.374 "superblock": true, 00:14:18.374 "num_base_bdevs": 2, 00:14:18.374 "num_base_bdevs_discovered": 1, 00:14:18.374 "num_base_bdevs_operational": 1, 00:14:18.374 "base_bdevs_list": [ 00:14:18.374 { 00:14:18.374 "name": null, 00:14:18.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.374 "is_configured": false, 00:14:18.374 "data_offset": 0, 00:14:18.374 "data_size": 63488 00:14:18.374 }, 00:14:18.374 { 00:14:18.374 "name": "BaseBdev2", 00:14:18.374 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0", 00:14:18.374 "is_configured": true, 00:14:18.374 "data_offset": 2048, 00:14:18.374 "data_size": 63488 00:14:18.374 } 00:14:18.374 ] 00:14:18.374 }' 00:14:18.374 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.374 11:24:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.948 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:18.948 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.948 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:18.948 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:18.948 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.948 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.948 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.948 11:24:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.948 11:24:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.948 11:24:01 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.948 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.948 "name": "raid_bdev1", 00:14:18.948 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc", 00:14:18.948 "strip_size_kb": 0, 00:14:18.948 "state": "online", 00:14:18.948 "raid_level": "raid1", 00:14:18.948 "superblock": true, 00:14:18.948 "num_base_bdevs": 2, 00:14:18.948 "num_base_bdevs_discovered": 1, 00:14:18.948 "num_base_bdevs_operational": 1, 00:14:18.948 "base_bdevs_list": [ 00:14:18.948 { 00:14:18.948 "name": null, 00:14:18.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.948 "is_configured": false, 00:14:18.948 "data_offset": 0, 00:14:18.948 "data_size": 63488 00:14:18.948 }, 00:14:18.948 { 00:14:18.948 "name": "BaseBdev2", 00:14:18.948 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0", 00:14:18.948 "is_configured": true, 00:14:18.948 "data_offset": 2048, 00:14:18.948 "data_size": 63488 00:14:18.948 } 00:14:18.948 ] 00:14:18.948 }' 00:14:18.948 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.948 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:18.948 11:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.948 11:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:18.948 11:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:18.948 11:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:14:18.948 11:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:18.948 11:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:18.948 11:24:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.948 11:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:18.948 11:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.948 11:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:18.948 11:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.948 11:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.948 [2024-11-20 11:24:02.039259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:18.948 [2024-11-20 11:24:02.039474] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:18.948 [2024-11-20 11:24:02.039503] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:18.948 request: 00:14:18.948 { 00:14:18.948 "base_bdev": "BaseBdev1", 00:14:18.948 "raid_bdev": "raid_bdev1", 00:14:18.948 "method": "bdev_raid_add_base_bdev", 00:14:18.948 "req_id": 1 00:14:18.948 } 00:14:18.949 Got JSON-RPC error response 00:14:18.949 response: 00:14:18.949 { 00:14:18.949 "code": -22, 00:14:18.949 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:18.949 } 00:14:18.949 11:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:18.949 11:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:14:18.949 11:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:18.949 11:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:18.949 11:24:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:18.949 11:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:20.347 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:20.347 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.347 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.347 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.347 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.347 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:20.347 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.347 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.347 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.347 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.347 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.347 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.347 11:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.347 11:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.347 11:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.347 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.347 "name": "raid_bdev1", 00:14:20.347 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc", 
00:14:20.347 "strip_size_kb": 0, 00:14:20.347 "state": "online", 00:14:20.347 "raid_level": "raid1", 00:14:20.347 "superblock": true, 00:14:20.347 "num_base_bdevs": 2, 00:14:20.347 "num_base_bdevs_discovered": 1, 00:14:20.347 "num_base_bdevs_operational": 1, 00:14:20.347 "base_bdevs_list": [ 00:14:20.347 { 00:14:20.347 "name": null, 00:14:20.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.347 "is_configured": false, 00:14:20.347 "data_offset": 0, 00:14:20.347 "data_size": 63488 00:14:20.347 }, 00:14:20.347 { 00:14:20.347 "name": "BaseBdev2", 00:14:20.347 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0", 00:14:20.347 "is_configured": true, 00:14:20.347 "data_offset": 2048, 00:14:20.347 "data_size": 63488 00:14:20.347 } 00:14:20.347 ] 00:14:20.347 }' 00:14:20.347 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.347 11:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.613 11:24:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.613 "name": "raid_bdev1", 00:14:20.613 "uuid": "76d5709b-4e23-4a3c-9c21-4309362c5abc", 00:14:20.613 "strip_size_kb": 0, 00:14:20.613 "state": "online", 00:14:20.613 "raid_level": "raid1", 00:14:20.613 "superblock": true, 00:14:20.613 "num_base_bdevs": 2, 00:14:20.613 "num_base_bdevs_discovered": 1, 00:14:20.613 "num_base_bdevs_operational": 1, 00:14:20.613 "base_bdevs_list": [ 00:14:20.613 { 00:14:20.613 "name": null, 00:14:20.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.613 "is_configured": false, 00:14:20.613 "data_offset": 0, 00:14:20.613 "data_size": 63488 00:14:20.613 }, 00:14:20.613 { 00:14:20.613 "name": "BaseBdev2", 00:14:20.613 "uuid": "3c21311e-fd53-587d-8ddc-fd13f15390f0", 00:14:20.613 "is_configured": true, 00:14:20.613 "data_offset": 2048, 00:14:20.613 "data_size": 63488 00:14:20.613 } 00:14:20.613 ] 00:14:20.613 }' 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75883 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75883 ']' 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75883 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75883 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:20.613 killing process with pid 75883 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75883' 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75883 00:14:20.613 Received shutdown signal, test time was about 60.000000 seconds 00:14:20.613 00:14:20.613 Latency(us) 00:14:20.613 [2024-11-20T11:24:03.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.613 [2024-11-20T11:24:03.729Z] =================================================================================================================== 00:14:20.613 [2024-11-20T11:24:03.729Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:20.613 [2024-11-20 11:24:03.693196] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:20.613 [2024-11-20 11:24:03.693342] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:20.613 11:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75883 00:14:20.613 [2024-11-20 11:24:03.693405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:20.613 [2024-11-20 11:24:03.693419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:21.193 [2024-11-20 11:24:04.007381] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:22.170 00:14:22.170 real 0m23.650s 
00:14:22.170 user 0m28.850s 00:14:22.170 sys 0m3.697s 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.170 ************************************ 00:14:22.170 END TEST raid_rebuild_test_sb 00:14:22.170 ************************************ 00:14:22.170 11:24:05 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:14:22.170 11:24:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:22.170 11:24:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:22.170 11:24:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:22.170 ************************************ 00:14:22.170 START TEST raid_rebuild_test_io 00:14:22.170 ************************************ 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:22.170 
11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76620 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76620 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76620 ']' 00:14:22.170 11:24:05 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.170 11:24:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.434 [2024-11-20 11:24:05.310534] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:14:22.434 [2024-11-20 11:24:05.310653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76620 ] 00:14:22.434 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:22.434 Zero copy mechanism will not be used. 
00:14:22.435 [2024-11-20 11:24:05.486306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.694 [2024-11-20 11:24:05.605145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.953 [2024-11-20 11:24:05.811157] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.953 [2024-11-20 11:24:05.811229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.212 BaseBdev1_malloc 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.212 [2024-11-20 11:24:06.234467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:23.212 [2024-11-20 11:24:06.234557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.212 [2024-11-20 11:24:06.234584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:23.212 [2024-11-20 
11:24:06.234596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.212 [2024-11-20 11:24:06.236909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.212 [2024-11-20 11:24:06.236952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:23.212 BaseBdev1 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.212 BaseBdev2_malloc 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.212 [2024-11-20 11:24:06.291204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:23.212 [2024-11-20 11:24:06.291283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.212 [2024-11-20 11:24:06.291305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:23.212 [2024-11-20 11:24:06.291315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.212 [2024-11-20 11:24:06.293649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:23.212 [2024-11-20 11:24:06.293690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:23.212 BaseBdev2 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.212 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.530 spare_malloc 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.530 spare_delay 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.530 [2024-11-20 11:24:06.371863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:23.530 [2024-11-20 11:24:06.371962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.530 [2024-11-20 11:24:06.371990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:23.530 [2024-11-20 11:24:06.372002] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.530 [2024-11-20 11:24:06.374436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.530 [2024-11-20 11:24:06.374486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:23.530 spare 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.530 [2024-11-20 11:24:06.383850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.530 [2024-11-20 11:24:06.385653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:23.530 [2024-11-20 11:24:06.385743] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:23.530 [2024-11-20 11:24:06.385756] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:23.530 [2024-11-20 11:24:06.386002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:23.530 [2024-11-20 11:24:06.386184] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:23.530 [2024-11-20 11:24:06.386200] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:23.530 [2024-11-20 11:24:06.386356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.530 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.530 "name": "raid_bdev1", 00:14:23.531 "uuid": "bfce97e7-0617-48c2-a9f4-73fe91a3892d", 00:14:23.531 "strip_size_kb": 0, 00:14:23.531 "state": "online", 00:14:23.531 "raid_level": "raid1", 00:14:23.531 "superblock": false, 00:14:23.531 "num_base_bdevs": 2, 00:14:23.531 
"num_base_bdevs_discovered": 2, 00:14:23.531 "num_base_bdevs_operational": 2, 00:14:23.531 "base_bdevs_list": [ 00:14:23.531 { 00:14:23.531 "name": "BaseBdev1", 00:14:23.531 "uuid": "aac1e42f-cf0c-5c62-aa51-2e61ec8a3106", 00:14:23.531 "is_configured": true, 00:14:23.531 "data_offset": 0, 00:14:23.531 "data_size": 65536 00:14:23.531 }, 00:14:23.531 { 00:14:23.531 "name": "BaseBdev2", 00:14:23.531 "uuid": "f0d4d780-272f-5f0d-9a07-2439c21b9837", 00:14:23.531 "is_configured": true, 00:14:23.531 "data_offset": 0, 00:14:23.531 "data_size": 65536 00:14:23.531 } 00:14:23.531 ] 00:14:23.531 }' 00:14:23.531 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.531 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.809 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:23.809 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:23.809 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.810 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.810 [2024-11-20 11:24:06.887307] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.810 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.810 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.069 [2024-11-20 11:24:06.978872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.069 11:24:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.069 11:24:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.069 11:24:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.069 "name": "raid_bdev1", 00:14:24.069 "uuid": "bfce97e7-0617-48c2-a9f4-73fe91a3892d", 00:14:24.069 "strip_size_kb": 0, 00:14:24.069 "state": "online", 00:14:24.069 "raid_level": "raid1", 00:14:24.069 "superblock": false, 00:14:24.069 "num_base_bdevs": 2, 00:14:24.069 "num_base_bdevs_discovered": 1, 00:14:24.069 "num_base_bdevs_operational": 1, 00:14:24.069 "base_bdevs_list": [ 00:14:24.069 { 00:14:24.069 "name": null, 00:14:24.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.069 "is_configured": false, 00:14:24.069 "data_offset": 0, 00:14:24.069 "data_size": 65536 00:14:24.069 }, 00:14:24.069 { 00:14:24.069 "name": "BaseBdev2", 00:14:24.069 "uuid": "f0d4d780-272f-5f0d-9a07-2439c21b9837", 00:14:24.069 "is_configured": true, 00:14:24.069 "data_offset": 0, 00:14:24.069 "data_size": 65536 00:14:24.069 } 00:14:24.069 ] 00:14:24.069 }' 00:14:24.069 11:24:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.069 11:24:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.069 [2024-11-20 11:24:07.086953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:24.069 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:14:24.070 Zero copy mechanism will not be used. 00:14:24.070 Running I/O for 60 seconds... 00:14:24.639 11:24:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:24.639 11:24:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.639 11:24:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.639 [2024-11-20 11:24:07.457224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:24.639 11:24:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.639 11:24:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:24.639 [2024-11-20 11:24:07.495132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:24.639 [2024-11-20 11:24:07.497177] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:24.639 [2024-11-20 11:24:07.610995] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:24.639 [2024-11-20 11:24:07.611646] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:24.898 [2024-11-20 11:24:07.821141] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:24.898 [2024-11-20 11:24:07.821530] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:25.417 135.00 IOPS, 405.00 MiB/s [2024-11-20T11:24:08.533Z] [2024-11-20 11:24:08.287673] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:25.417 [2024-11-20 11:24:08.288035] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:25.417 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.417 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.417 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.417 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.417 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.417 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.417 11:24:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.417 11:24:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.417 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.417 11:24:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.684 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.684 "name": "raid_bdev1", 00:14:25.684 "uuid": "bfce97e7-0617-48c2-a9f4-73fe91a3892d", 00:14:25.684 "strip_size_kb": 0, 00:14:25.684 "state": "online", 00:14:25.684 "raid_level": "raid1", 00:14:25.684 "superblock": false, 00:14:25.684 "num_base_bdevs": 2, 00:14:25.684 "num_base_bdevs_discovered": 2, 00:14:25.684 "num_base_bdevs_operational": 2, 00:14:25.684 "process": { 00:14:25.684 "type": "rebuild", 00:14:25.684 "target": "spare", 00:14:25.684 "progress": { 00:14:25.684 "blocks": 10240, 00:14:25.684 "percent": 15 00:14:25.684 } 00:14:25.684 }, 00:14:25.684 "base_bdevs_list": [ 00:14:25.684 { 00:14:25.684 "name": "spare", 00:14:25.684 "uuid": "dca91c8f-034c-58d3-b15a-5a52ad03cd4e", 00:14:25.684 
"is_configured": true, 00:14:25.684 "data_offset": 0, 00:14:25.684 "data_size": 65536 00:14:25.684 }, 00:14:25.684 { 00:14:25.684 "name": "BaseBdev2", 00:14:25.684 "uuid": "f0d4d780-272f-5f0d-9a07-2439c21b9837", 00:14:25.684 "is_configured": true, 00:14:25.684 "data_offset": 0, 00:14:25.684 "data_size": 65536 00:14:25.684 } 00:14:25.684 ] 00:14:25.684 }' 00:14:25.684 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.684 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.684 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.684 [2024-11-20 11:24:08.618575] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:25.684 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.684 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:25.684 11:24:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.684 11:24:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.684 [2024-11-20 11:24:08.645963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:25.684 [2024-11-20 11:24:08.745719] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:25.943 [2024-11-20 11:24:08.852923] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:25.943 [2024-11-20 11:24:08.861540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.943 [2024-11-20 11:24:08.861587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:25.943 [2024-11-20 11:24:08.861604] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:25.943 [2024-11-20 11:24:08.906624] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:25.943 11:24:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.943 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:25.943 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.943 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.943 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.943 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.943 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:25.943 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.943 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.943 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.943 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.944 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.944 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.944 11:24:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.944 11:24:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.944 11:24:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:25.944 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.944 "name": "raid_bdev1", 00:14:25.944 "uuid": "bfce97e7-0617-48c2-a9f4-73fe91a3892d", 00:14:25.944 "strip_size_kb": 0, 00:14:25.944 "state": "online", 00:14:25.944 "raid_level": "raid1", 00:14:25.944 "superblock": false, 00:14:25.944 "num_base_bdevs": 2, 00:14:25.944 "num_base_bdevs_discovered": 1, 00:14:25.944 "num_base_bdevs_operational": 1, 00:14:25.944 "base_bdevs_list": [ 00:14:25.944 { 00:14:25.944 "name": null, 00:14:25.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.944 "is_configured": false, 00:14:25.944 "data_offset": 0, 00:14:25.944 "data_size": 65536 00:14:25.944 }, 00:14:25.944 { 00:14:25.944 "name": "BaseBdev2", 00:14:25.944 "uuid": "f0d4d780-272f-5f0d-9a07-2439c21b9837", 00:14:25.944 "is_configured": true, 00:14:25.944 "data_offset": 0, 00:14:25.944 "data_size": 65536 00:14:25.944 } 00:14:25.944 ] 00:14:25.944 }' 00:14:25.944 11:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.944 11:24:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.465 123.50 IOPS, 370.50 MiB/s [2024-11-20T11:24:09.581Z] 11:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:26.465 11:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.465 11:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:26.465 11:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:26.465 11:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.465 11:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.465 11:24:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:26.465 11:24:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.465 11:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.465 11:24:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.465 11:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.465 "name": "raid_bdev1", 00:14:26.465 "uuid": "bfce97e7-0617-48c2-a9f4-73fe91a3892d", 00:14:26.465 "strip_size_kb": 0, 00:14:26.465 "state": "online", 00:14:26.465 "raid_level": "raid1", 00:14:26.465 "superblock": false, 00:14:26.465 "num_base_bdevs": 2, 00:14:26.465 "num_base_bdevs_discovered": 1, 00:14:26.465 "num_base_bdevs_operational": 1, 00:14:26.465 "base_bdevs_list": [ 00:14:26.465 { 00:14:26.465 "name": null, 00:14:26.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.465 "is_configured": false, 00:14:26.465 "data_offset": 0, 00:14:26.465 "data_size": 65536 00:14:26.465 }, 00:14:26.465 { 00:14:26.465 "name": "BaseBdev2", 00:14:26.465 "uuid": "f0d4d780-272f-5f0d-9a07-2439c21b9837", 00:14:26.465 "is_configured": true, 00:14:26.465 "data_offset": 0, 00:14:26.465 "data_size": 65536 00:14:26.465 } 00:14:26.465 ] 00:14:26.465 }' 00:14:26.465 11:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.465 11:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:26.466 11:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.466 11:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:26.466 11:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:26.466 11:24:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.466 11:24:09 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.466 [2024-11-20 11:24:09.532512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.466 11:24:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.466 11:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:26.726 [2024-11-20 11:24:09.595969] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:26.726 [2024-11-20 11:24:09.598059] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:26.726 [2024-11-20 11:24:09.712900] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:26.726 [2024-11-20 11:24:09.713540] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:26.985 [2024-11-20 11:24:09.927322] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:26.985 [2024-11-20 11:24:09.927718] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:27.244 137.67 IOPS, 413.00 MiB/s [2024-11-20T11:24:10.360Z] [2024-11-20 11:24:10.262249] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:27.525 [2024-11-20 11:24:10.503984] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:27.525 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.525 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.525 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:14:27.525 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.525 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.525 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.525 11:24:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.525 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.525 11:24:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.525 11:24:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.525 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.525 "name": "raid_bdev1", 00:14:27.525 "uuid": "bfce97e7-0617-48c2-a9f4-73fe91a3892d", 00:14:27.525 "strip_size_kb": 0, 00:14:27.525 "state": "online", 00:14:27.525 "raid_level": "raid1", 00:14:27.525 "superblock": false, 00:14:27.525 "num_base_bdevs": 2, 00:14:27.525 "num_base_bdevs_discovered": 2, 00:14:27.525 "num_base_bdevs_operational": 2, 00:14:27.525 "process": { 00:14:27.525 "type": "rebuild", 00:14:27.525 "target": "spare", 00:14:27.525 "progress": { 00:14:27.525 "blocks": 10240, 00:14:27.525 "percent": 15 00:14:27.525 } 00:14:27.525 }, 00:14:27.525 "base_bdevs_list": [ 00:14:27.525 { 00:14:27.525 "name": "spare", 00:14:27.525 "uuid": "dca91c8f-034c-58d3-b15a-5a52ad03cd4e", 00:14:27.525 "is_configured": true, 00:14:27.525 "data_offset": 0, 00:14:27.525 "data_size": 65536 00:14:27.525 }, 00:14:27.525 { 00:14:27.525 "name": "BaseBdev2", 00:14:27.525 "uuid": "f0d4d780-272f-5f0d-9a07-2439c21b9837", 00:14:27.525 "is_configured": true, 00:14:27.525 "data_offset": 0, 00:14:27.525 "data_size": 65536 00:14:27.525 } 00:14:27.525 ] 00:14:27.525 }' 00:14:27.525 11:24:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=416 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.786 
11:24:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.786 "name": "raid_bdev1", 00:14:27.786 "uuid": "bfce97e7-0617-48c2-a9f4-73fe91a3892d", 00:14:27.786 "strip_size_kb": 0, 00:14:27.786 "state": "online", 00:14:27.786 "raid_level": "raid1", 00:14:27.786 "superblock": false, 00:14:27.786 "num_base_bdevs": 2, 00:14:27.786 "num_base_bdevs_discovered": 2, 00:14:27.786 "num_base_bdevs_operational": 2, 00:14:27.786 "process": { 00:14:27.786 "type": "rebuild", 00:14:27.786 "target": "spare", 00:14:27.786 "progress": { 00:14:27.786 "blocks": 10240, 00:14:27.786 "percent": 15 00:14:27.786 } 00:14:27.786 }, 00:14:27.786 "base_bdevs_list": [ 00:14:27.786 { 00:14:27.786 "name": "spare", 00:14:27.786 "uuid": "dca91c8f-034c-58d3-b15a-5a52ad03cd4e", 00:14:27.786 "is_configured": true, 00:14:27.786 "data_offset": 0, 00:14:27.786 "data_size": 65536 00:14:27.786 }, 00:14:27.786 { 00:14:27.786 "name": "BaseBdev2", 00:14:27.786 "uuid": "f0d4d780-272f-5f0d-9a07-2439c21b9837", 00:14:27.786 "is_configured": true, 00:14:27.786 "data_offset": 0, 00:14:27.786 "data_size": 65536 00:14:27.786 } 00:14:27.786 ] 00:14:27.786 }' 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.786 11:24:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:28.045 [2024-11-20 11:24:10.953998] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:28.045 [2024-11-20 11:24:10.954353] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:28.304 126.75 IOPS, 380.25 MiB/s [2024-11-20T11:24:11.420Z] [2024-11-20 11:24:11.176185] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:28.304 [2024-11-20 11:24:11.176825] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:28.304 [2024-11-20 11:24:11.386539] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:28.564 [2024-11-20 11:24:11.633761] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:28.824 [2024-11-20 11:24:11.753537] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:28.824 11:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:28.824 11:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.824 11:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.824 11:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.824 11:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.824 11:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.824 11:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.824 11:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:28.824 11:24:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.824 11:24:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.824 11:24:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.824 11:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.824 "name": "raid_bdev1", 00:14:28.824 "uuid": "bfce97e7-0617-48c2-a9f4-73fe91a3892d", 00:14:28.824 "strip_size_kb": 0, 00:14:28.824 "state": "online", 00:14:28.824 "raid_level": "raid1", 00:14:28.824 "superblock": false, 00:14:28.824 "num_base_bdevs": 2, 00:14:28.824 "num_base_bdevs_discovered": 2, 00:14:28.824 "num_base_bdevs_operational": 2, 00:14:28.824 "process": { 00:14:28.824 "type": "rebuild", 00:14:28.824 "target": "spare", 00:14:28.824 "progress": { 00:14:28.824 "blocks": 28672, 00:14:28.824 "percent": 43 00:14:28.824 } 00:14:28.824 }, 00:14:28.824 "base_bdevs_list": [ 00:14:28.824 { 00:14:28.824 "name": "spare", 00:14:28.824 "uuid": "dca91c8f-034c-58d3-b15a-5a52ad03cd4e", 00:14:28.824 "is_configured": true, 00:14:28.824 "data_offset": 0, 00:14:28.824 "data_size": 65536 00:14:28.824 }, 00:14:28.824 { 00:14:28.824 "name": "BaseBdev2", 00:14:28.824 "uuid": "f0d4d780-272f-5f0d-9a07-2439c21b9837", 00:14:28.824 "is_configured": true, 00:14:28.824 "data_offset": 0, 00:14:28.824 "data_size": 65536 00:14:28.824 } 00:14:28.824 ] 00:14:28.824 }' 00:14:28.824 11:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.824 11:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.824 11:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.083 11:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.083 11:24:11 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:29.342 112.00 IOPS, 336.00 MiB/s [2024-11-20T11:24:12.458Z] [2024-11-20 11:24:12.205926] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:29.342 [2024-11-20 11:24:12.439161] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:29.602 [2024-11-20 11:24:12.663606] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:29.862 11:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.862 11:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.862 11:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.862 11:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.862 11:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.862 11:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.120 11:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.120 11:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.120 11:24:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.120 11:24:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.120 11:24:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.120 11:24:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.120 "name": "raid_bdev1", 00:14:30.120 "uuid": 
"bfce97e7-0617-48c2-a9f4-73fe91a3892d", 00:14:30.120 "strip_size_kb": 0, 00:14:30.120 "state": "online", 00:14:30.120 "raid_level": "raid1", 00:14:30.120 "superblock": false, 00:14:30.120 "num_base_bdevs": 2, 00:14:30.120 "num_base_bdevs_discovered": 2, 00:14:30.120 "num_base_bdevs_operational": 2, 00:14:30.120 "process": { 00:14:30.120 "type": "rebuild", 00:14:30.120 "target": "spare", 00:14:30.120 "progress": { 00:14:30.120 "blocks": 43008, 00:14:30.120 "percent": 65 00:14:30.120 } 00:14:30.120 }, 00:14:30.120 "base_bdevs_list": [ 00:14:30.120 { 00:14:30.120 "name": "spare", 00:14:30.120 "uuid": "dca91c8f-034c-58d3-b15a-5a52ad03cd4e", 00:14:30.120 "is_configured": true, 00:14:30.120 "data_offset": 0, 00:14:30.120 "data_size": 65536 00:14:30.120 }, 00:14:30.120 { 00:14:30.120 "name": "BaseBdev2", 00:14:30.120 "uuid": "f0d4d780-272f-5f0d-9a07-2439c21b9837", 00:14:30.120 "is_configured": true, 00:14:30.120 "data_offset": 0, 00:14:30.120 "data_size": 65536 00:14:30.120 } 00:14:30.120 ] 00:14:30.120 }' 00:14:30.121 11:24:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.121 11:24:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:30.121 11:24:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.121 99.67 IOPS, 299.00 MiB/s [2024-11-20T11:24:13.237Z] 11:24:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.121 11:24:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:30.387 [2024-11-20 11:24:13.312313] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:30.648 [2024-11-20 11:24:13.642485] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:30.648 [2024-11-20 11:24:13.643092] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:30.909 [2024-11-20 11:24:13.850307] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:30.909 [2024-11-20 11:24:13.850678] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:31.169 90.71 IOPS, 272.14 MiB/s [2024-11-20T11:24:14.285Z] 11:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:31.169 11:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.169 11:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.169 11:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.169 11:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.169 11:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.169 11:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.169 11:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.169 11:24:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.169 11:24:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.169 11:24:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.169 11:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.169 "name": "raid_bdev1", 00:14:31.169 "uuid": "bfce97e7-0617-48c2-a9f4-73fe91a3892d", 00:14:31.169 "strip_size_kb": 0, 00:14:31.169 "state": "online", 00:14:31.169 "raid_level": 
"raid1", 00:14:31.169 "superblock": false, 00:14:31.169 "num_base_bdevs": 2, 00:14:31.169 "num_base_bdevs_discovered": 2, 00:14:31.169 "num_base_bdevs_operational": 2, 00:14:31.169 "process": { 00:14:31.169 "type": "rebuild", 00:14:31.169 "target": "spare", 00:14:31.169 "progress": { 00:14:31.169 "blocks": 61440, 00:14:31.169 "percent": 93 00:14:31.169 } 00:14:31.169 }, 00:14:31.169 "base_bdevs_list": [ 00:14:31.169 { 00:14:31.169 "name": "spare", 00:14:31.169 "uuid": "dca91c8f-034c-58d3-b15a-5a52ad03cd4e", 00:14:31.169 "is_configured": true, 00:14:31.169 "data_offset": 0, 00:14:31.169 "data_size": 65536 00:14:31.169 }, 00:14:31.169 { 00:14:31.169 "name": "BaseBdev2", 00:14:31.169 "uuid": "f0d4d780-272f-5f0d-9a07-2439c21b9837", 00:14:31.169 "is_configured": true, 00:14:31.169 "data_offset": 0, 00:14:31.169 "data_size": 65536 00:14:31.169 } 00:14:31.169 ] 00:14:31.169 }' 00:14:31.169 11:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.169 11:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.169 11:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.169 11:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.169 11:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:31.428 [2024-11-20 11:24:14.283972] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:31.428 [2024-11-20 11:24:14.390225] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:31.428 [2024-11-20 11:24:14.392754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.257 83.25 IOPS, 249.75 MiB/s [2024-11-20T11:24:15.373Z] 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:32.257 11:24:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:32.257 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.257 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:32.257 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:32.257 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.257 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.257 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.257 11:24:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.257 11:24:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.257 11:24:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.257 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.257 "name": "raid_bdev1", 00:14:32.257 "uuid": "bfce97e7-0617-48c2-a9f4-73fe91a3892d", 00:14:32.257 "strip_size_kb": 0, 00:14:32.257 "state": "online", 00:14:32.257 "raid_level": "raid1", 00:14:32.257 "superblock": false, 00:14:32.257 "num_base_bdevs": 2, 00:14:32.257 "num_base_bdevs_discovered": 2, 00:14:32.257 "num_base_bdevs_operational": 2, 00:14:32.257 "base_bdevs_list": [ 00:14:32.257 { 00:14:32.257 "name": "spare", 00:14:32.257 "uuid": "dca91c8f-034c-58d3-b15a-5a52ad03cd4e", 00:14:32.257 "is_configured": true, 00:14:32.257 "data_offset": 0, 00:14:32.257 "data_size": 65536 00:14:32.257 }, 00:14:32.257 { 00:14:32.257 "name": "BaseBdev2", 00:14:32.257 "uuid": "f0d4d780-272f-5f0d-9a07-2439c21b9837", 00:14:32.257 "is_configured": true, 00:14:32.257 "data_offset": 0, 00:14:32.257 
"data_size": 65536 00:14:32.257 } 00:14:32.257 ] 00:14:32.257 }' 00:14:32.257 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.257 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:32.257 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.516 "name": "raid_bdev1", 00:14:32.516 "uuid": "bfce97e7-0617-48c2-a9f4-73fe91a3892d", 00:14:32.516 "strip_size_kb": 0, 00:14:32.516 "state": "online", 00:14:32.516 "raid_level": 
"raid1", 00:14:32.516 "superblock": false, 00:14:32.516 "num_base_bdevs": 2, 00:14:32.516 "num_base_bdevs_discovered": 2, 00:14:32.516 "num_base_bdevs_operational": 2, 00:14:32.516 "base_bdevs_list": [ 00:14:32.516 { 00:14:32.516 "name": "spare", 00:14:32.516 "uuid": "dca91c8f-034c-58d3-b15a-5a52ad03cd4e", 00:14:32.516 "is_configured": true, 00:14:32.516 "data_offset": 0, 00:14:32.516 "data_size": 65536 00:14:32.516 }, 00:14:32.516 { 00:14:32.516 "name": "BaseBdev2", 00:14:32.516 "uuid": "f0d4d780-272f-5f0d-9a07-2439c21b9837", 00:14:32.516 "is_configured": true, 00:14:32.516 "data_offset": 0, 00:14:32.516 "data_size": 65536 00:14:32.516 } 00:14:32.516 ] 00:14:32.516 }' 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.516 "name": "raid_bdev1", 00:14:32.516 "uuid": "bfce97e7-0617-48c2-a9f4-73fe91a3892d", 00:14:32.516 "strip_size_kb": 0, 00:14:32.516 "state": "online", 00:14:32.516 "raid_level": "raid1", 00:14:32.516 "superblock": false, 00:14:32.516 "num_base_bdevs": 2, 00:14:32.516 "num_base_bdevs_discovered": 2, 00:14:32.516 "num_base_bdevs_operational": 2, 00:14:32.516 "base_bdevs_list": [ 00:14:32.516 { 00:14:32.516 "name": "spare", 00:14:32.516 "uuid": "dca91c8f-034c-58d3-b15a-5a52ad03cd4e", 00:14:32.516 "is_configured": true, 00:14:32.516 "data_offset": 0, 00:14:32.516 "data_size": 65536 00:14:32.516 }, 00:14:32.516 { 00:14:32.516 "name": "BaseBdev2", 00:14:32.516 "uuid": "f0d4d780-272f-5f0d-9a07-2439c21b9837", 00:14:32.516 "is_configured": true, 00:14:32.516 "data_offset": 0, 00:14:32.516 "data_size": 65536 00:14:32.516 } 00:14:32.516 ] 00:14:32.516 }' 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.516 11:24:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.103 11:24:15 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:33.103 11:24:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.103 11:24:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.103 [2024-11-20 11:24:15.921939] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:33.103 [2024-11-20 11:24:15.921977] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:33.103 00:14:33.103 Latency(us) 00:14:33.103 [2024-11-20T11:24:16.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.103 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:33.103 raid_bdev1 : 8.94 78.73 236.19 0.00 0.00 17666.70 313.01 116304.94 00:14:33.103 [2024-11-20T11:24:16.219Z] =================================================================================================================== 00:14:33.103 [2024-11-20T11:24:16.219Z] Total : 78.73 236.19 0.00 0.00 17666.70 313.01 116304.94 00:14:33.103 [2024-11-20 11:24:16.038084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.103 [2024-11-20 11:24:16.038154] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:33.103 [2024-11-20 11:24:16.038256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:33.103 [2024-11-20 11:24:16.038293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:33.103 { 00:14:33.103 "results": [ 00:14:33.103 { 00:14:33.103 "job": "raid_bdev1", 00:14:33.103 "core_mask": "0x1", 00:14:33.103 "workload": "randrw", 00:14:33.103 "percentage": 50, 00:14:33.103 "status": "finished", 00:14:33.103 "queue_depth": 2, 00:14:33.103 "io_size": 3145728, 00:14:33.103 "runtime": 8.941942, 00:14:33.104 "iops": 
78.73010135829554, 00:14:33.104 "mibps": 236.19030407488663, 00:14:33.104 "io_failed": 0, 00:14:33.104 "io_timeout": 0, 00:14:33.104 "avg_latency_us": 17666.70329495832, 00:14:33.104 "min_latency_us": 313.0131004366812, 00:14:33.104 "max_latency_us": 116304.93624454149 00:14:33.104 } 00:14:33.104 ], 00:14:33.104 "core_count": 1 00:14:33.104 } 00:14:33.104 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.104 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.104 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.104 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.104 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:33.104 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.104 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:33.104 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:33.104 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:33.104 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:33.104 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:33.104 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:33.104 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:33.104 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:33.104 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:33.104 11:24:16 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@12 -- # local i 00:14:33.104 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:33.104 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:33.104 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:33.383 /dev/nbd0 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:33.383 1+0 records in 00:14:33.383 1+0 records out 00:14:33.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296884 s, 13.8 MB/s 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@890 -- # size=4096 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:33.383 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:33.641 /dev/nbd1 00:14:33.641 11:24:16 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:33.641 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:33.641 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:33.641 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:33.641 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:33.641 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:33.641 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:33.641 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:33.641 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:33.641 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:33.641 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:33.641 1+0 records in 00:14:33.641 1+0 records out 00:14:33.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440272 s, 9.3 MB/s 00:14:33.641 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.641 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:33.641 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.641 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:33.641 11:24:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:33.641 11:24:16 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:33.641 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:33.641 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:33.900 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:33.900 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:33.900 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:33.900 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:33.900 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:33.900 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:33.900 11:24:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:33.900 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd0 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76620 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76620 ']' 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76620 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@959 -- # uname 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.158 11:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76620 00:14:34.417 11:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:34.417 11:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:34.417 killing process with pid 76620 00:14:34.417 11:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76620' 00:14:34.417 11:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76620 00:14:34.417 Received shutdown signal, test time was about 10.217166 seconds 00:14:34.417 00:14:34.417 Latency(us) 00:14:34.417 [2024-11-20T11:24:17.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.417 [2024-11-20T11:24:17.533Z] =================================================================================================================== 00:14:34.417 [2024-11-20T11:24:17.533Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:34.417 [2024-11-20 11:24:17.286505] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:34.417 11:24:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76620 00:14:34.674 [2024-11-20 11:24:17.549519] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:36.047 00:14:36.047 real 0m13.610s 00:14:36.047 user 0m16.955s 00:14:36.047 sys 0m1.498s 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.047 ************************************ 
00:14:36.047 END TEST raid_rebuild_test_io 00:14:36.047 ************************************ 00:14:36.047 11:24:18 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:14:36.047 11:24:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:36.047 11:24:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:36.047 11:24:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:36.047 ************************************ 00:14:36.047 START TEST raid_rebuild_test_sb_io 00:14:36.047 ************************************ 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:36.047 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:36.048 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:36.048 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77015 00:14:36.048 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77015 00:14:36.048 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:36.048 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77015 ']' 00:14:36.048 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.048 
11:24:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.048 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.048 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.048 11:24:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.048 [2024-11-20 11:24:18.990537] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:14:36.048 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:36.048 Zero copy mechanism will not be used. 00:14:36.048 [2024-11-20 11:24:18.990658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77015 ] 00:14:36.048 [2024-11-20 11:24:19.144856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.311 [2024-11-20 11:24:19.262910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.579 [2024-11-20 11:24:19.476050] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.579 [2024-11-20 11:24:19.476130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.838 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.838 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:36.838 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.838 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:36.839 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.839 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.839 BaseBdev1_malloc 00:14:36.839 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.839 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:36.839 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.839 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.839 [2024-11-20 11:24:19.888333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:36.839 [2024-11-20 11:24:19.888418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.839 [2024-11-20 11:24:19.888444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:36.839 [2024-11-20 11:24:19.888473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.839 [2024-11-20 11:24:19.890809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.839 [2024-11-20 11:24:19.890855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:36.839 BaseBdev1 00:14:36.839 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.839 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.839 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:36.839 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:36.839 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.839 BaseBdev2_malloc 00:14:36.839 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.839 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:36.839 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.839 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.839 [2024-11-20 11:24:19.946407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:36.839 [2024-11-20 11:24:19.946489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.839 [2024-11-20 11:24:19.946509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:36.839 [2024-11-20 11:24:19.946523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.839 [2024-11-20 11:24:19.948789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.839 [2024-11-20 11:24:19.948829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:36.839 BaseBdev2 00:14:36.839 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.839 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:36.839 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.839 11:24:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.099 spare_malloc 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.099 11:24:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.099 spare_delay 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.099 [2024-11-20 11:24:20.017778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:37.099 [2024-11-20 11:24:20.017845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.099 [2024-11-20 11:24:20.017866] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:37.099 [2024-11-20 11:24:20.017878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.099 [2024-11-20 11:24:20.020164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.099 [2024-11-20 11:24:20.020211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:37.099 spare 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.099 11:24:20 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.099 [2024-11-20 11:24:20.025829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.099 [2024-11-20 11:24:20.027812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.099 [2024-11-20 11:24:20.028007] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:37.099 [2024-11-20 11:24:20.028034] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:37.099 [2024-11-20 11:24:20.028296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:37.099 [2024-11-20 11:24:20.028508] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:37.099 [2024-11-20 11:24:20.028523] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:37.099 [2024-11-20 11:24:20.028695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.099 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.100 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.100 "name": "raid_bdev1", 00:14:37.100 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:37.100 "strip_size_kb": 0, 00:14:37.100 "state": "online", 00:14:37.100 "raid_level": "raid1", 00:14:37.100 "superblock": true, 00:14:37.100 "num_base_bdevs": 2, 00:14:37.100 "num_base_bdevs_discovered": 2, 00:14:37.100 "num_base_bdevs_operational": 2, 00:14:37.100 "base_bdevs_list": [ 00:14:37.100 { 00:14:37.100 "name": "BaseBdev1", 00:14:37.100 "uuid": "e807db65-6f60-5e52-9c1f-a7f7b5041464", 00:14:37.100 "is_configured": true, 00:14:37.100 "data_offset": 2048, 00:14:37.100 "data_size": 63488 00:14:37.100 }, 00:14:37.100 { 00:14:37.100 "name": "BaseBdev2", 00:14:37.100 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:37.100 "is_configured": true, 00:14:37.100 "data_offset": 2048, 00:14:37.100 "data_size": 63488 00:14:37.100 } 00:14:37.100 ] 00:14:37.100 }' 00:14:37.100 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:14:37.100 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.670 [2024-11-20 11:24:20.481383] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.670 [2024-11-20 11:24:20.580873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.670 11:24:20 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.670 "name": "raid_bdev1", 00:14:37.670 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:37.670 "strip_size_kb": 0, 00:14:37.670 "state": "online", 00:14:37.670 "raid_level": "raid1", 00:14:37.670 "superblock": true, 00:14:37.670 "num_base_bdevs": 2, 00:14:37.670 "num_base_bdevs_discovered": 1, 00:14:37.670 "num_base_bdevs_operational": 1, 00:14:37.670 "base_bdevs_list": [ 00:14:37.670 { 00:14:37.670 "name": null, 00:14:37.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.670 "is_configured": false, 00:14:37.670 "data_offset": 0, 00:14:37.670 "data_size": 63488 00:14:37.670 }, 00:14:37.670 { 00:14:37.670 "name": "BaseBdev2", 00:14:37.670 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:37.670 "is_configured": true, 00:14:37.670 "data_offset": 2048, 00:14:37.670 "data_size": 63488 00:14:37.670 } 00:14:37.670 ] 00:14:37.670 }' 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.670 11:24:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.670 [2024-11-20 11:24:20.669380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:37.670 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:37.670 Zero copy mechanism will not be used. 00:14:37.670 Running I/O for 60 seconds... 
00:14:38.239 11:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:38.239 11:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.239 11:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.239 [2024-11-20 11:24:21.065109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.239 11:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.239 11:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:38.239 [2024-11-20 11:24:21.114805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:38.239 [2024-11-20 11:24:21.116935] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:38.239 [2024-11-20 11:24:21.233445] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:38.239 [2024-11-20 11:24:21.234078] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:38.498 [2024-11-20 11:24:21.450277] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:38.498 [2024-11-20 11:24:21.450663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:38.757 141.00 IOPS, 423.00 MiB/s [2024-11-20T11:24:21.874Z] [2024-11-20 11:24:21.791217] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:38.758 [2024-11-20 11:24:21.791888] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:39.017 [2024-11-20 11:24:22.006660] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:39.017 [2024-11-20 11:24:22.007022] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:39.017 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.017 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.017 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.017 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.017 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.017 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.017 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.017 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.017 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.276 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.276 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.276 "name": "raid_bdev1", 00:14:39.276 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:39.276 "strip_size_kb": 0, 00:14:39.276 "state": "online", 00:14:39.276 "raid_level": "raid1", 00:14:39.276 "superblock": true, 00:14:39.276 "num_base_bdevs": 2, 00:14:39.276 "num_base_bdevs_discovered": 2, 00:14:39.276 "num_base_bdevs_operational": 2, 00:14:39.276 "process": { 00:14:39.276 "type": "rebuild", 00:14:39.276 "target": "spare", 00:14:39.276 "progress": { 
00:14:39.276 "blocks": 10240, 00:14:39.276 "percent": 16 00:14:39.276 } 00:14:39.276 }, 00:14:39.276 "base_bdevs_list": [ 00:14:39.276 { 00:14:39.276 "name": "spare", 00:14:39.276 "uuid": "35aa1a7d-8432-560f-ad9a-3b356c9167ca", 00:14:39.276 "is_configured": true, 00:14:39.276 "data_offset": 2048, 00:14:39.276 "data_size": 63488 00:14:39.276 }, 00:14:39.276 { 00:14:39.276 "name": "BaseBdev2", 00:14:39.276 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:39.276 "is_configured": true, 00:14:39.276 "data_offset": 2048, 00:14:39.276 "data_size": 63488 00:14:39.276 } 00:14:39.276 ] 00:14:39.276 }' 00:14:39.276 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.276 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.276 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.276 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.276 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:39.276 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.276 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.276 [2024-11-20 11:24:22.251799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:39.276 [2024-11-20 11:24:22.351419] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:39.535 [2024-11-20 11:24:22.452589] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:39.535 [2024-11-20 11:24:22.468205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.535 [2024-11-20 11:24:22.468271] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:39.535 [2024-11-20 11:24:22.468302] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:39.535 [2024-11-20 11:24:22.512255] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:39.535 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.535 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:39.535 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.535 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.535 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.535 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.535 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:39.535 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.535 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.535 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.535 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.535 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.535 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.535 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.535 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.535 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.535 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.535 "name": "raid_bdev1", 00:14:39.535 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:39.535 "strip_size_kb": 0, 00:14:39.535 "state": "online", 00:14:39.535 "raid_level": "raid1", 00:14:39.535 "superblock": true, 00:14:39.535 "num_base_bdevs": 2, 00:14:39.535 "num_base_bdevs_discovered": 1, 00:14:39.535 "num_base_bdevs_operational": 1, 00:14:39.535 "base_bdevs_list": [ 00:14:39.535 { 00:14:39.535 "name": null, 00:14:39.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.535 "is_configured": false, 00:14:39.535 "data_offset": 0, 00:14:39.535 "data_size": 63488 00:14:39.535 }, 00:14:39.535 { 00:14:39.535 "name": "BaseBdev2", 00:14:39.535 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:39.535 "is_configured": true, 00:14:39.535 "data_offset": 2048, 00:14:39.535 "data_size": 63488 00:14:39.535 } 00:14:39.535 ] 00:14:39.535 }' 00:14:39.535 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.535 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.062 111.50 IOPS, 334.50 MiB/s [2024-11-20T11:24:23.178Z] 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:40.062 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.062 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:40.062 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:40.062 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.062 11:24:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.062 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.063 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.063 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.063 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.063 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.063 "name": "raid_bdev1", 00:14:40.063 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:40.063 "strip_size_kb": 0, 00:14:40.063 "state": "online", 00:14:40.063 "raid_level": "raid1", 00:14:40.063 "superblock": true, 00:14:40.063 "num_base_bdevs": 2, 00:14:40.063 "num_base_bdevs_discovered": 1, 00:14:40.063 "num_base_bdevs_operational": 1, 00:14:40.063 "base_bdevs_list": [ 00:14:40.063 { 00:14:40.063 "name": null, 00:14:40.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.063 "is_configured": false, 00:14:40.063 "data_offset": 0, 00:14:40.063 "data_size": 63488 00:14:40.063 }, 00:14:40.063 { 00:14:40.063 "name": "BaseBdev2", 00:14:40.063 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:40.063 "is_configured": true, 00:14:40.063 "data_offset": 2048, 00:14:40.063 "data_size": 63488 00:14:40.063 } 00:14:40.063 ] 00:14:40.063 }' 00:14:40.063 11:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.063 11:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:40.063 11:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.063 11:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:40.063 11:24:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:40.063 11:24:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.063 11:24:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.063 [2024-11-20 11:24:23.112695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:40.063 11:24:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.063 11:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:40.322 [2024-11-20 11:24:23.185026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:40.322 [2024-11-20 11:24:23.187087] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:40.322 [2024-11-20 11:24:23.302125] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:40.322 [2024-11-20 11:24:23.302764] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:40.322 [2024-11-20 11:24:23.430693] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:40.322 [2024-11-20 11:24:23.431051] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:40.842 134.67 IOPS, 404.00 MiB/s [2024-11-20T11:24:23.958Z] [2024-11-20 11:24:23.754797] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:41.101 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.101 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:41.101 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.101 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.101 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.101 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.101 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.101 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.101 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.101 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.101 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.101 "name": "raid_bdev1", 00:14:41.101 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:41.101 "strip_size_kb": 0, 00:14:41.101 "state": "online", 00:14:41.101 "raid_level": "raid1", 00:14:41.101 "superblock": true, 00:14:41.101 "num_base_bdevs": 2, 00:14:41.101 "num_base_bdevs_discovered": 2, 00:14:41.101 "num_base_bdevs_operational": 2, 00:14:41.101 "process": { 00:14:41.101 "type": "rebuild", 00:14:41.101 "target": "spare", 00:14:41.101 "progress": { 00:14:41.101 "blocks": 12288, 00:14:41.101 "percent": 19 00:14:41.101 } 00:14:41.101 }, 00:14:41.101 "base_bdevs_list": [ 00:14:41.101 { 00:14:41.101 "name": "spare", 00:14:41.101 "uuid": "35aa1a7d-8432-560f-ad9a-3b356c9167ca", 00:14:41.101 "is_configured": true, 00:14:41.101 "data_offset": 2048, 00:14:41.101 "data_size": 63488 00:14:41.101 }, 00:14:41.101 { 00:14:41.101 "name": "BaseBdev2", 00:14:41.101 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:41.101 "is_configured": true, 00:14:41.101 
"data_offset": 2048, 00:14:41.101 "data_size": 63488 00:14:41.101 } 00:14:41.101 ] 00:14:41.101 }' 00:14:41.101 [2024-11-20 11:24:24.213158] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:41.101 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:41.361 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=430 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.361 11:24:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.361 "name": "raid_bdev1", 00:14:41.361 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:41.361 "strip_size_kb": 0, 00:14:41.361 "state": "online", 00:14:41.361 "raid_level": "raid1", 00:14:41.361 "superblock": true, 00:14:41.361 "num_base_bdevs": 2, 00:14:41.361 "num_base_bdevs_discovered": 2, 00:14:41.361 "num_base_bdevs_operational": 2, 00:14:41.361 "process": { 00:14:41.361 "type": "rebuild", 00:14:41.361 "target": "spare", 00:14:41.361 "progress": { 00:14:41.361 "blocks": 14336, 00:14:41.361 "percent": 22 00:14:41.361 } 00:14:41.361 }, 00:14:41.361 "base_bdevs_list": [ 00:14:41.361 { 00:14:41.361 "name": "spare", 00:14:41.361 "uuid": "35aa1a7d-8432-560f-ad9a-3b356c9167ca", 00:14:41.361 "is_configured": true, 00:14:41.361 "data_offset": 2048, 00:14:41.361 "data_size": 63488 00:14:41.361 }, 00:14:41.361 { 00:14:41.361 "name": "BaseBdev2", 00:14:41.361 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:41.361 "is_configured": true, 00:14:41.361 "data_offset": 2048, 00:14:41.361 "data_size": 63488 00:14:41.361 } 00:14:41.361 ] 00:14:41.361 }' 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.361 11:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:41.361 [2024-11-20 11:24:24.459600] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:41.361 [2024-11-20 11:24:24.459934] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:41.880 121.25 IOPS, 363.75 MiB/s [2024-11-20T11:24:24.996Z] [2024-11-20 11:24:24.877105] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:42.140 [2024-11-20 11:24:25.226272] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:42.400 [2024-11-20 11:24:25.335732] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:42.400 [2024-11-20 11:24:25.336084] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:42.400 11:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.400 11:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.400 11:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.400 11:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:14:42.400 11:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.400 11:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.400 11:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.400 11:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.400 11:24:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.400 11:24:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.400 11:24:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.400 11:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.400 "name": "raid_bdev1", 00:14:42.400 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:42.400 "strip_size_kb": 0, 00:14:42.400 "state": "online", 00:14:42.400 "raid_level": "raid1", 00:14:42.400 "superblock": true, 00:14:42.400 "num_base_bdevs": 2, 00:14:42.400 "num_base_bdevs_discovered": 2, 00:14:42.400 "num_base_bdevs_operational": 2, 00:14:42.400 "process": { 00:14:42.400 "type": "rebuild", 00:14:42.400 "target": "spare", 00:14:42.400 "progress": { 00:14:42.400 "blocks": 28672, 00:14:42.400 "percent": 45 00:14:42.400 } 00:14:42.400 }, 00:14:42.400 "base_bdevs_list": [ 00:14:42.400 { 00:14:42.400 "name": "spare", 00:14:42.400 "uuid": "35aa1a7d-8432-560f-ad9a-3b356c9167ca", 00:14:42.400 "is_configured": true, 00:14:42.400 "data_offset": 2048, 00:14:42.400 "data_size": 63488 00:14:42.400 }, 00:14:42.400 { 00:14:42.400 "name": "BaseBdev2", 00:14:42.400 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:42.400 "is_configured": true, 00:14:42.400 "data_offset": 2048, 00:14:42.400 "data_size": 63488 00:14:42.400 } 00:14:42.400 ] 00:14:42.400 }' 00:14:42.400 11:24:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.660 11:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.660 11:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.660 11:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.660 11:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.660 [2024-11-20 11:24:25.584655] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:42.920 113.80 IOPS, 341.40 MiB/s [2024-11-20T11:24:26.036Z] [2024-11-20 11:24:25.795159] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:42.920 [2024-11-20 11:24:25.795555] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:43.179 [2024-11-20 11:24:26.050525] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:43.438 [2024-11-20 11:24:26.398229] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:43.697 11:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:43.697 11:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.697 11:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.697 11:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.697 11:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:14:43.697 11:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.697 11:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.697 11:24:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.697 11:24:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.697 11:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.697 11:24:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.697 [2024-11-20 11:24:26.615373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:43.697 11:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.697 "name": "raid_bdev1", 00:14:43.697 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:43.697 "strip_size_kb": 0, 00:14:43.697 "state": "online", 00:14:43.697 "raid_level": "raid1", 00:14:43.697 "superblock": true, 00:14:43.697 "num_base_bdevs": 2, 00:14:43.697 "num_base_bdevs_discovered": 2, 00:14:43.697 "num_base_bdevs_operational": 2, 00:14:43.697 "process": { 00:14:43.697 "type": "rebuild", 00:14:43.697 "target": "spare", 00:14:43.697 "progress": { 00:14:43.697 "blocks": 45056, 00:14:43.697 "percent": 70 00:14:43.697 } 00:14:43.697 }, 00:14:43.697 "base_bdevs_list": [ 00:14:43.697 { 00:14:43.697 "name": "spare", 00:14:43.697 "uuid": "35aa1a7d-8432-560f-ad9a-3b356c9167ca", 00:14:43.697 "is_configured": true, 00:14:43.697 "data_offset": 2048, 00:14:43.697 "data_size": 63488 00:14:43.697 }, 00:14:43.697 { 00:14:43.697 "name": "BaseBdev2", 00:14:43.697 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:43.697 "is_configured": true, 00:14:43.697 "data_offset": 2048, 00:14:43.697 "data_size": 63488 00:14:43.697 } 00:14:43.697 ] 
00:14:43.697 }' 00:14:43.697 11:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.697 11:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.697 11:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.697 103.00 IOPS, 309.00 MiB/s [2024-11-20T11:24:26.813Z] 11:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.697 11:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:43.957 [2024-11-20 11:24:26.943039] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:44.216 [2024-11-20 11:24:27.276391] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:44.784 [2024-11-20 11:24:27.609788] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:44.784 92.29 IOPS, 276.86 MiB/s [2024-11-20T11:24:27.900Z] 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:44.784 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.784 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.784 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.784 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.784 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.784 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.784 [2024-11-20 11:24:27.709561] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:44.784 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.784 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.784 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.784 [2024-11-20 11:24:27.712005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.784 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.784 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.784 "name": "raid_bdev1", 00:14:44.784 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:44.784 "strip_size_kb": 0, 00:14:44.784 "state": "online", 00:14:44.784 "raid_level": "raid1", 00:14:44.784 "superblock": true, 00:14:44.784 "num_base_bdevs": 2, 00:14:44.785 "num_base_bdevs_discovered": 2, 00:14:44.785 "num_base_bdevs_operational": 2, 00:14:44.785 "base_bdevs_list": [ 00:14:44.785 { 00:14:44.785 "name": "spare", 00:14:44.785 "uuid": "35aa1a7d-8432-560f-ad9a-3b356c9167ca", 00:14:44.785 "is_configured": true, 00:14:44.785 "data_offset": 2048, 00:14:44.785 "data_size": 63488 00:14:44.785 }, 00:14:44.785 { 00:14:44.785 "name": "BaseBdev2", 00:14:44.785 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:44.785 "is_configured": true, 00:14:44.785 "data_offset": 2048, 00:14:44.785 "data_size": 63488 00:14:44.785 } 00:14:44.785 ] 00:14:44.785 }' 00:14:44.785 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.785 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:44.785 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.785 
11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:44.785 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:44.785 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:44.785 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.785 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:44.785 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:44.785 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.785 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.785 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.785 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.785 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.785 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.045 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.045 "name": "raid_bdev1", 00:14:45.045 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:45.045 "strip_size_kb": 0, 00:14:45.045 "state": "online", 00:14:45.045 "raid_level": "raid1", 00:14:45.045 "superblock": true, 00:14:45.045 "num_base_bdevs": 2, 00:14:45.045 "num_base_bdevs_discovered": 2, 00:14:45.045 "num_base_bdevs_operational": 2, 00:14:45.045 "base_bdevs_list": [ 00:14:45.045 { 00:14:45.045 "name": "spare", 00:14:45.045 "uuid": "35aa1a7d-8432-560f-ad9a-3b356c9167ca", 00:14:45.045 "is_configured": true, 00:14:45.045 "data_offset": 2048, 
00:14:45.045 "data_size": 63488 00:14:45.045 }, 00:14:45.045 { 00:14:45.045 "name": "BaseBdev2", 00:14:45.045 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:45.045 "is_configured": true, 00:14:45.045 "data_offset": 2048, 00:14:45.045 "data_size": 63488 00:14:45.045 } 00:14:45.045 ] 00:14:45.045 }' 00:14:45.045 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.045 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:45.045 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.045 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:45.045 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:45.045 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.045 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.045 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.045 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.045 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:45.045 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.045 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.045 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.045 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.045 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:45.045 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.045 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.045 11:24:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.045 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.045 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.045 "name": "raid_bdev1", 00:14:45.045 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:45.045 "strip_size_kb": 0, 00:14:45.045 "state": "online", 00:14:45.045 "raid_level": "raid1", 00:14:45.045 "superblock": true, 00:14:45.045 "num_base_bdevs": 2, 00:14:45.045 "num_base_bdevs_discovered": 2, 00:14:45.045 "num_base_bdevs_operational": 2, 00:14:45.045 "base_bdevs_list": [ 00:14:45.045 { 00:14:45.045 "name": "spare", 00:14:45.045 "uuid": "35aa1a7d-8432-560f-ad9a-3b356c9167ca", 00:14:45.045 "is_configured": true, 00:14:45.045 "data_offset": 2048, 00:14:45.045 "data_size": 63488 00:14:45.045 }, 00:14:45.045 { 00:14:45.045 "name": "BaseBdev2", 00:14:45.045 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:45.045 "is_configured": true, 00:14:45.045 "data_offset": 2048, 00:14:45.045 "data_size": 63488 00:14:45.045 } 00:14:45.045 ] 00:14:45.045 }' 00:14:45.045 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.045 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.621 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:45.621 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.621 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.621 
[2024-11-20 11:24:28.447942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:45.621 [2024-11-20 11:24:28.447980] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.621 00:14:45.621 Latency(us) 00:14:45.621 [2024-11-20T11:24:28.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.621 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:45.621 raid_bdev1 : 7.86 85.55 256.64 0.00 0.00 15147.93 320.17 117220.72 00:14:45.621 [2024-11-20T11:24:28.737Z] =================================================================================================================== 00:14:45.621 [2024-11-20T11:24:28.737Z] Total : 85.55 256.64 0.00 0.00 15147.93 320.17 117220.72 00:14:45.621 [2024-11-20 11:24:28.537370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.621 [2024-11-20 11:24:28.537429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.621 [2024-11-20 11:24:28.537533] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.621 [2024-11-20 11:24:28.537549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:45.621 { 00:14:45.621 "results": [ 00:14:45.621 { 00:14:45.621 "job": "raid_bdev1", 00:14:45.621 "core_mask": "0x1", 00:14:45.621 "workload": "randrw", 00:14:45.621 "percentage": 50, 00:14:45.621 "status": "finished", 00:14:45.621 "queue_depth": 2, 00:14:45.621 "io_size": 3145728, 00:14:45.621 "runtime": 7.85531, 00:14:45.621 "iops": 85.54722856259015, 00:14:45.621 "mibps": 256.64168568777046, 00:14:45.621 "io_failed": 0, 00:14:45.621 "io_timeout": 0, 00:14:45.621 "avg_latency_us": 15147.930962778124, 00:14:45.621 "min_latency_us": 320.16768558951964, 00:14:45.621 "max_latency_us": 117220.7231441048 00:14:45.621 } 
00:14:45.621 ], 00:14:45.621 "core_count": 1 00:14:45.621 } 00:14:45.621 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.621 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.621 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.621 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.621 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:45.621 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.621 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:45.621 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:45.621 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:45.621 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:45.621 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.621 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:45.621 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:45.621 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:45.621 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:45.621 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:45.621 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:45.622 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.622 11:24:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:45.896 /dev/nbd0 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.896 1+0 records in 00:14:45.896 1+0 records out 00:14:45.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414966 s, 9.9 MB/s 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.896 
11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.896 11:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:46.156 /dev/nbd1 00:14:46.156 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:46.156 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:14:46.156 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:46.156 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:46.156 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:46.156 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:46.156 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:46.156 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:46.156 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:46.156 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:46.156 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:46.156 1+0 records in 00:14:46.156 1+0 records out 00:14:46.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339746 s, 12.1 MB/s 00:14:46.156 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.156 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:46.156 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.156 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:46.156 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:46.156 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:46.156 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:46.156 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:46.415 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:46.415 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:46.415 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:46.415 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:46.415 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:46.415 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.415 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:46.675 11:24:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.675 11:24:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.675 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.676 [2024-11-20 11:24:29.789297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:46.676 [2024-11-20 11:24:29.789362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.676 [2024-11-20 11:24:29.789384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:46.676 [2024-11-20 11:24:29.789397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.935 [2024-11-20 11:24:29.791960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.935 [2024-11-20 11:24:29.792069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:46.935 [2024-11-20 11:24:29.792222] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:46.935 [2024-11-20 11:24:29.792331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:46.935 [2024-11-20 11:24:29.792550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.935 spare 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:46.935 [2024-11-20 11:24:29.892527] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:46.935 [2024-11-20 11:24:29.892644] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:46.935 [2024-11-20 11:24:29.893084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:14:46.935 [2024-11-20 11:24:29.893319] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:46.935 [2024-11-20 11:24:29.893366] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:46.935 [2024-11-20 11:24:29.893638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.935 "name": "raid_bdev1", 00:14:46.935 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:46.935 "strip_size_kb": 0, 00:14:46.935 "state": "online", 00:14:46.935 "raid_level": "raid1", 00:14:46.935 "superblock": true, 00:14:46.935 "num_base_bdevs": 2, 00:14:46.935 "num_base_bdevs_discovered": 2, 00:14:46.935 "num_base_bdevs_operational": 2, 00:14:46.935 "base_bdevs_list": [ 00:14:46.935 { 00:14:46.935 "name": "spare", 00:14:46.935 "uuid": "35aa1a7d-8432-560f-ad9a-3b356c9167ca", 00:14:46.935 "is_configured": true, 00:14:46.935 "data_offset": 2048, 00:14:46.935 "data_size": 63488 00:14:46.935 }, 00:14:46.935 { 00:14:46.935 "name": "BaseBdev2", 00:14:46.935 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:46.935 "is_configured": true, 00:14:46.935 "data_offset": 2048, 00:14:46.935 "data_size": 63488 00:14:46.935 } 00:14:46.935 ] 00:14:46.935 }' 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.935 11:24:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.503 "name": "raid_bdev1", 00:14:47.503 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:47.503 "strip_size_kb": 0, 00:14:47.503 "state": "online", 00:14:47.503 "raid_level": "raid1", 00:14:47.503 "superblock": true, 00:14:47.503 "num_base_bdevs": 2, 00:14:47.503 "num_base_bdevs_discovered": 2, 00:14:47.503 "num_base_bdevs_operational": 2, 00:14:47.503 "base_bdevs_list": [ 00:14:47.503 { 00:14:47.503 "name": "spare", 00:14:47.503 "uuid": "35aa1a7d-8432-560f-ad9a-3b356c9167ca", 00:14:47.503 "is_configured": true, 00:14:47.503 "data_offset": 2048, 00:14:47.503 "data_size": 63488 00:14:47.503 }, 00:14:47.503 { 00:14:47.503 "name": "BaseBdev2", 00:14:47.503 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:47.503 "is_configured": true, 00:14:47.503 "data_offset": 2048, 00:14:47.503 "data_size": 63488 00:14:47.503 } 00:14:47.503 ] 00:14:47.503 }' 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.503 [2024-11-20 11:24:30.540635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.503 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.504 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:47.504 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.504 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.504 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.504 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.504 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.504 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.504 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.504 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.504 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.504 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.504 "name": "raid_bdev1", 00:14:47.504 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:47.504 "strip_size_kb": 0, 00:14:47.504 "state": "online", 00:14:47.504 "raid_level": "raid1", 00:14:47.504 "superblock": true, 00:14:47.504 "num_base_bdevs": 2, 00:14:47.504 "num_base_bdevs_discovered": 1, 00:14:47.504 "num_base_bdevs_operational": 1, 00:14:47.504 "base_bdevs_list": [ 00:14:47.504 { 00:14:47.504 "name": null, 00:14:47.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.504 "is_configured": false, 00:14:47.504 "data_offset": 0, 00:14:47.504 "data_size": 63488 00:14:47.504 }, 00:14:47.504 { 
00:14:47.504 "name": "BaseBdev2", 00:14:47.504 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:47.504 "is_configured": true, 00:14:47.504 "data_offset": 2048, 00:14:47.504 "data_size": 63488 00:14:47.504 } 00:14:47.504 ] 00:14:47.504 }' 00:14:47.504 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.504 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.072 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:48.072 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.072 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.072 [2024-11-20 11:24:30.908082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:48.072 [2024-11-20 11:24:30.908315] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:48.072 [2024-11-20 11:24:30.908331] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:48.072 [2024-11-20 11:24:30.908377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:48.072 [2024-11-20 11:24:30.925539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:14:48.072 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.072 11:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:48.072 [2024-11-20 11:24:30.927565] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:49.009 11:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.009 11:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.009 11:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.009 11:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.009 11:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.009 11:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.009 11:24:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.009 11:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.009 11:24:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.009 11:24:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.009 11:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.009 "name": "raid_bdev1", 00:14:49.009 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:49.009 "strip_size_kb": 0, 00:14:49.009 "state": "online", 
00:14:49.009 "raid_level": "raid1", 00:14:49.009 "superblock": true, 00:14:49.009 "num_base_bdevs": 2, 00:14:49.009 "num_base_bdevs_discovered": 2, 00:14:49.009 "num_base_bdevs_operational": 2, 00:14:49.009 "process": { 00:14:49.009 "type": "rebuild", 00:14:49.009 "target": "spare", 00:14:49.009 "progress": { 00:14:49.009 "blocks": 20480, 00:14:49.009 "percent": 32 00:14:49.009 } 00:14:49.009 }, 00:14:49.009 "base_bdevs_list": [ 00:14:49.009 { 00:14:49.009 "name": "spare", 00:14:49.009 "uuid": "35aa1a7d-8432-560f-ad9a-3b356c9167ca", 00:14:49.009 "is_configured": true, 00:14:49.009 "data_offset": 2048, 00:14:49.009 "data_size": 63488 00:14:49.009 }, 00:14:49.009 { 00:14:49.009 "name": "BaseBdev2", 00:14:49.009 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:49.009 "is_configured": true, 00:14:49.009 "data_offset": 2048, 00:14:49.009 "data_size": 63488 00:14:49.009 } 00:14:49.009 ] 00:14:49.009 }' 00:14:49.009 11:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.009 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.009 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.009 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.009 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:49.009 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.009 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.009 [2024-11-20 11:24:32.055725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:49.268 [2024-11-20 11:24:32.133629] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:49.268 [2024-11-20 
11:24:32.133721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.268 [2024-11-20 11:24:32.133740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:49.268 [2024-11-20 11:24:32.133747] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:49.268 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.269 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:49.269 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.269 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.269 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.269 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.269 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:49.269 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.269 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.269 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.269 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.269 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.269 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.269 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.269 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:49.269 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.269 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.269 "name": "raid_bdev1", 00:14:49.269 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:49.269 "strip_size_kb": 0, 00:14:49.269 "state": "online", 00:14:49.269 "raid_level": "raid1", 00:14:49.269 "superblock": true, 00:14:49.269 "num_base_bdevs": 2, 00:14:49.269 "num_base_bdevs_discovered": 1, 00:14:49.269 "num_base_bdevs_operational": 1, 00:14:49.269 "base_bdevs_list": [ 00:14:49.269 { 00:14:49.269 "name": null, 00:14:49.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.269 "is_configured": false, 00:14:49.269 "data_offset": 0, 00:14:49.269 "data_size": 63488 00:14:49.269 }, 00:14:49.269 { 00:14:49.269 "name": "BaseBdev2", 00:14:49.269 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:49.269 "is_configured": true, 00:14:49.269 "data_offset": 2048, 00:14:49.269 "data_size": 63488 00:14:49.269 } 00:14:49.269 ] 00:14:49.269 }' 00:14:49.269 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.269 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.837 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:49.837 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.837 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.837 [2024-11-20 11:24:32.663789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:49.837 [2024-11-20 11:24:32.663937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.837 [2024-11-20 11:24:32.664000] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:14:49.837 [2024-11-20 11:24:32.664034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.837 [2024-11-20 11:24:32.664606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.837 [2024-11-20 11:24:32.664668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:49.837 [2024-11-20 11:24:32.664814] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:49.837 [2024-11-20 11:24:32.664861] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:49.837 [2024-11-20 11:24:32.664907] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:49.837 [2024-11-20 11:24:32.664946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:49.837 [2024-11-20 11:24:32.682757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:14:49.837 spare 00:14:49.837 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.837 [2024-11-20 11:24:32.684919] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:49.837 11:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:50.774 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.774 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.774 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.774 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.774 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.774 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.774 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.774 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.774 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.774 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.774 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.774 "name": "raid_bdev1", 00:14:50.774 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:50.774 "strip_size_kb": 0, 00:14:50.774 "state": "online", 00:14:50.774 "raid_level": "raid1", 00:14:50.774 "superblock": true, 00:14:50.775 "num_base_bdevs": 2, 00:14:50.775 "num_base_bdevs_discovered": 2, 00:14:50.775 "num_base_bdevs_operational": 2, 00:14:50.775 "process": { 00:14:50.775 "type": "rebuild", 00:14:50.775 "target": "spare", 00:14:50.775 "progress": { 00:14:50.775 "blocks": 20480, 00:14:50.775 "percent": 32 00:14:50.775 } 00:14:50.775 }, 00:14:50.775 "base_bdevs_list": [ 00:14:50.775 { 00:14:50.775 "name": "spare", 00:14:50.775 "uuid": "35aa1a7d-8432-560f-ad9a-3b356c9167ca", 00:14:50.775 "is_configured": true, 00:14:50.775 "data_offset": 2048, 00:14:50.775 "data_size": 63488 00:14:50.775 }, 00:14:50.775 { 00:14:50.775 "name": "BaseBdev2", 00:14:50.775 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:50.775 "is_configured": true, 00:14:50.775 "data_offset": 2048, 00:14:50.775 "data_size": 63488 00:14:50.775 } 00:14:50.775 ] 00:14:50.775 }' 00:14:50.775 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.775 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:50.775 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.775 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.775 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:50.775 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.775 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.775 [2024-11-20 11:24:33.852722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.034 [2024-11-20 11:24:33.891062] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:51.034 [2024-11-20 11:24:33.891153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.034 [2024-11-20 11:24:33.891171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.034 [2024-11-20 11:24:33.891181] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:51.034 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.034 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:51.034 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.035 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.035 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.035 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.035 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:14:51.035 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.035 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.035 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.035 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.035 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.035 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.035 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.035 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.035 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.035 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.035 "name": "raid_bdev1", 00:14:51.035 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:51.035 "strip_size_kb": 0, 00:14:51.035 "state": "online", 00:14:51.035 "raid_level": "raid1", 00:14:51.035 "superblock": true, 00:14:51.035 "num_base_bdevs": 2, 00:14:51.035 "num_base_bdevs_discovered": 1, 00:14:51.035 "num_base_bdevs_operational": 1, 00:14:51.035 "base_bdevs_list": [ 00:14:51.035 { 00:14:51.035 "name": null, 00:14:51.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.035 "is_configured": false, 00:14:51.035 "data_offset": 0, 00:14:51.035 "data_size": 63488 00:14:51.035 }, 00:14:51.035 { 00:14:51.035 "name": "BaseBdev2", 00:14:51.035 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:51.035 "is_configured": true, 00:14:51.035 "data_offset": 2048, 00:14:51.035 "data_size": 63488 00:14:51.035 } 00:14:51.035 ] 00:14:51.035 }' 
00:14:51.035 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.035 11:24:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.294 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:51.294 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.294 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:51.294 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:51.294 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.294 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.294 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.294 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.294 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.554 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.554 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.554 "name": "raid_bdev1", 00:14:51.554 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:51.554 "strip_size_kb": 0, 00:14:51.554 "state": "online", 00:14:51.554 "raid_level": "raid1", 00:14:51.554 "superblock": true, 00:14:51.554 "num_base_bdevs": 2, 00:14:51.554 "num_base_bdevs_discovered": 1, 00:14:51.554 "num_base_bdevs_operational": 1, 00:14:51.554 "base_bdevs_list": [ 00:14:51.554 { 00:14:51.554 "name": null, 00:14:51.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.554 "is_configured": false, 00:14:51.554 "data_offset": 0, 
00:14:51.554 "data_size": 63488 00:14:51.554 }, 00:14:51.554 { 00:14:51.554 "name": "BaseBdev2", 00:14:51.554 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:51.554 "is_configured": true, 00:14:51.554 "data_offset": 2048, 00:14:51.554 "data_size": 63488 00:14:51.554 } 00:14:51.554 ] 00:14:51.554 }' 00:14:51.554 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.554 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:51.554 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.554 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:51.554 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:51.554 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.554 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.554 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.554 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:51.554 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.554 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.554 [2024-11-20 11:24:34.540675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:51.554 [2024-11-20 11:24:34.540744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.554 [2024-11-20 11:24:34.540768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:51.554 [2024-11-20 11:24:34.540779] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.554 [2024-11-20 11:24:34.541251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.554 [2024-11-20 11:24:34.541274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:51.554 [2024-11-20 11:24:34.541358] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:51.554 [2024-11-20 11:24:34.541380] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:51.554 [2024-11-20 11:24:34.541388] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:51.554 [2024-11-20 11:24:34.541401] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:51.554 BaseBdev1 00:14:51.554 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.554 11:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:52.493 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:52.493 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.493 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.493 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.493 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.493 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:52.493 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.493 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.493 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.493 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.493 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.493 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.493 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.493 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.493 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.493 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.493 "name": "raid_bdev1", 00:14:52.493 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:52.493 "strip_size_kb": 0, 00:14:52.493 "state": "online", 00:14:52.493 "raid_level": "raid1", 00:14:52.493 "superblock": true, 00:14:52.493 "num_base_bdevs": 2, 00:14:52.493 "num_base_bdevs_discovered": 1, 00:14:52.493 "num_base_bdevs_operational": 1, 00:14:52.493 "base_bdevs_list": [ 00:14:52.493 { 00:14:52.493 "name": null, 00:14:52.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.493 "is_configured": false, 00:14:52.493 "data_offset": 0, 00:14:52.493 "data_size": 63488 00:14:52.493 }, 00:14:52.493 { 00:14:52.493 "name": "BaseBdev2", 00:14:52.493 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:52.493 "is_configured": true, 00:14:52.493 "data_offset": 2048, 00:14:52.493 "data_size": 63488 00:14:52.493 } 00:14:52.493 ] 00:14:52.493 }' 00:14:52.493 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.493 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:53.063 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:53.063 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.063 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:53.063 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:53.063 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.063 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.063 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.063 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.063 11:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.063 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.063 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.063 "name": "raid_bdev1", 00:14:53.063 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:53.063 "strip_size_kb": 0, 00:14:53.063 "state": "online", 00:14:53.063 "raid_level": "raid1", 00:14:53.063 "superblock": true, 00:14:53.063 "num_base_bdevs": 2, 00:14:53.063 "num_base_bdevs_discovered": 1, 00:14:53.063 "num_base_bdevs_operational": 1, 00:14:53.063 "base_bdevs_list": [ 00:14:53.063 { 00:14:53.063 "name": null, 00:14:53.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.063 "is_configured": false, 00:14:53.063 "data_offset": 0, 00:14:53.063 "data_size": 63488 00:14:53.063 }, 00:14:53.063 { 00:14:53.063 "name": "BaseBdev2", 00:14:53.063 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:53.063 "is_configured": true, 
00:14:53.063 "data_offset": 2048, 00:14:53.063 "data_size": 63488 00:14:53.063 } 00:14:53.063 ] 00:14:53.063 }' 00:14:53.063 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.063 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:53.063 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.063 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:53.063 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:53.063 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:53.063 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:53.063 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:53.063 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.063 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:53.063 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.063 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:53.063 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.063 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.063 [2024-11-20 11:24:36.134348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:53.063 [2024-11-20 11:24:36.134549] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:53.063 [2024-11-20 11:24:36.134564] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:53.063 request: 00:14:53.063 { 00:14:53.063 "base_bdev": "BaseBdev1", 00:14:53.063 "raid_bdev": "raid_bdev1", 00:14:53.063 "method": "bdev_raid_add_base_bdev", 00:14:53.063 "req_id": 1 00:14:53.063 } 00:14:53.063 Got JSON-RPC error response 00:14:53.063 response: 00:14:53.063 { 00:14:53.063 "code": -22, 00:14:53.063 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:53.063 } 00:14:53.063 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:53.063 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:53.064 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:53.064 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:53.064 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:53.064 11:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:54.445 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:54.445 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.445 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.445 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.445 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.445 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:14:54.445 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.445 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.445 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.445 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.445 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.445 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.445 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.445 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.445 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.445 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.445 "name": "raid_bdev1", 00:14:54.445 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:54.445 "strip_size_kb": 0, 00:14:54.445 "state": "online", 00:14:54.445 "raid_level": "raid1", 00:14:54.445 "superblock": true, 00:14:54.445 "num_base_bdevs": 2, 00:14:54.445 "num_base_bdevs_discovered": 1, 00:14:54.445 "num_base_bdevs_operational": 1, 00:14:54.445 "base_bdevs_list": [ 00:14:54.445 { 00:14:54.445 "name": null, 00:14:54.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.445 "is_configured": false, 00:14:54.445 "data_offset": 0, 00:14:54.445 "data_size": 63488 00:14:54.445 }, 00:14:54.445 { 00:14:54.445 "name": "BaseBdev2", 00:14:54.445 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:54.445 "is_configured": true, 00:14:54.445 "data_offset": 2048, 00:14:54.445 "data_size": 63488 00:14:54.445 } 00:14:54.445 ] 00:14:54.445 }' 
00:14:54.445 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.445 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.704 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:54.704 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.704 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:54.704 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:54.704 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.704 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.704 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.704 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.704 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.704 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.704 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.704 "name": "raid_bdev1", 00:14:54.704 "uuid": "287f4541-468e-481c-abf6-f8648cfdefdb", 00:14:54.704 "strip_size_kb": 0, 00:14:54.704 "state": "online", 00:14:54.704 "raid_level": "raid1", 00:14:54.704 "superblock": true, 00:14:54.704 "num_base_bdevs": 2, 00:14:54.704 "num_base_bdevs_discovered": 1, 00:14:54.704 "num_base_bdevs_operational": 1, 00:14:54.704 "base_bdevs_list": [ 00:14:54.704 { 00:14:54.704 "name": null, 00:14:54.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.704 "is_configured": false, 00:14:54.704 "data_offset": 0, 
00:14:54.704 "data_size": 63488 00:14:54.704 }, 00:14:54.704 { 00:14:54.704 "name": "BaseBdev2", 00:14:54.704 "uuid": "bc92ac57-b752-54b3-b869-bcc340a27666", 00:14:54.704 "is_configured": true, 00:14:54.704 "data_offset": 2048, 00:14:54.704 "data_size": 63488 00:14:54.704 } 00:14:54.704 ] 00:14:54.704 }' 00:14:54.704 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.704 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:54.704 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.704 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:54.704 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77015 00:14:54.704 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77015 ']' 00:14:54.704 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77015 00:14:54.705 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:54.705 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:54.705 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77015 00:14:54.705 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:54.705 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:54.705 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77015' 00:14:54.705 killing process with pid 77015 00:14:54.705 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77015 00:14:54.705 Received shutdown signal, test time was 
about 17.164336 seconds 00:14:54.705 00:14:54.705 Latency(us) 00:14:54.705 [2024-11-20T11:24:37.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.705 [2024-11-20T11:24:37.821Z] =================================================================================================================== 00:14:54.705 [2024-11-20T11:24:37.821Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:54.705 [2024-11-20 11:24:37.802879] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:54.705 [2024-11-20 11:24:37.803018] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.705 11:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77015 00:14:54.705 [2024-11-20 11:24:37.803081] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:54.705 [2024-11-20 11:24:37.803092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:54.964 [2024-11-20 11:24:38.057653] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:56.342 00:14:56.342 real 0m20.404s 00:14:56.342 user 0m26.639s 00:14:56.342 sys 0m2.152s 00:14:56.342 ************************************ 00:14:56.342 END TEST raid_rebuild_test_sb_io 00:14:56.342 ************************************ 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.342 11:24:39 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:56.342 11:24:39 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:14:56.342 11:24:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:56.342 
11:24:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.342 11:24:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:56.342 ************************************ 00:14:56.342 START TEST raid_rebuild_test 00:14:56.342 ************************************ 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:56.342 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77705 00:14:56.343 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:56.343 11:24:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77705 00:14:56.343 11:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77705 ']' 00:14:56.343 11:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.343 11:24:39 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.343 11:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.343 11:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.343 11:24:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.601 [2024-11-20 11:24:39.459423] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:14:56.601 [2024-11-20 11:24:39.459657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:56.601 Zero copy mechanism will not be used. 00:14:56.601 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77705 ] 00:14:56.601 [2024-11-20 11:24:39.634005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.860 [2024-11-20 11:24:39.759198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.860 [2024-11-20 11:24:39.972862] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.860 [2024-11-20 11:24:39.972991] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.428 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.428 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:57.428 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:57.428 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:14:57.428 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.428 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.428 BaseBdev1_malloc 00:14:57.428 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.428 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:57.428 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.428 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.428 [2024-11-20 11:24:40.372657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:57.428 [2024-11-20 11:24:40.372729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.428 [2024-11-20 11:24:40.372756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:57.428 [2024-11-20 11:24:40.372767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.428 [2024-11-20 11:24:40.375008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.429 [2024-11-20 11:24:40.375052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:57.429 BaseBdev1 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:14:57.429 BaseBdev2_malloc 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.429 [2024-11-20 11:24:40.433431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:57.429 [2024-11-20 11:24:40.433533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.429 [2024-11-20 11:24:40.433558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:57.429 [2024-11-20 11:24:40.433569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.429 [2024-11-20 11:24:40.435818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.429 [2024-11-20 11:24:40.435861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:57.429 BaseBdev2 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.429 BaseBdev3_malloc 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.429 [2024-11-20 11:24:40.499730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:57.429 [2024-11-20 11:24:40.499824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.429 [2024-11-20 11:24:40.499855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:57.429 [2024-11-20 11:24:40.499868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.429 [2024-11-20 11:24:40.502195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.429 [2024-11-20 11:24:40.502325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:57.429 BaseBdev3 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.429 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.689 BaseBdev4_malloc 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:57.689 [2024-11-20 11:24:40.549662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:57.689 [2024-11-20 11:24:40.549809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.689 [2024-11-20 11:24:40.549834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:57.689 [2024-11-20 11:24:40.549845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.689 [2024-11-20 11:24:40.552150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.689 [2024-11-20 11:24:40.552195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:57.689 BaseBdev4 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.689 spare_malloc 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.689 spare_delay 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:57.689 
11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.689 [2024-11-20 11:24:40.617175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:57.689 [2024-11-20 11:24:40.617254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.689 [2024-11-20 11:24:40.617280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:57.689 [2024-11-20 11:24:40.617292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.689 [2024-11-20 11:24:40.619761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.689 [2024-11-20 11:24:40.619805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:57.689 spare 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.689 [2024-11-20 11:24:40.629219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:57.689 [2024-11-20 11:24:40.631414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.689 [2024-11-20 11:24:40.631526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:57.689 [2024-11-20 11:24:40.631591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:57.689 [2024-11-20 11:24:40.631698] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:14:57.689 [2024-11-20 11:24:40.631713] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:57.689 [2024-11-20 11:24:40.632048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:57.689 [2024-11-20 11:24:40.632274] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:57.689 [2024-11-20 11:24:40.632288] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:57.689 [2024-11-20 11:24:40.632544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.689 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.689 11:24:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.690 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.690 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.690 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.690 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.690 "name": "raid_bdev1", 00:14:57.690 "uuid": "a6034a1b-5c17-44ad-9e97-7117cd0c9a17", 00:14:57.690 "strip_size_kb": 0, 00:14:57.690 "state": "online", 00:14:57.690 "raid_level": "raid1", 00:14:57.690 "superblock": false, 00:14:57.690 "num_base_bdevs": 4, 00:14:57.690 "num_base_bdevs_discovered": 4, 00:14:57.690 "num_base_bdevs_operational": 4, 00:14:57.690 "base_bdevs_list": [ 00:14:57.690 { 00:14:57.690 "name": "BaseBdev1", 00:14:57.690 "uuid": "fb2ac303-ca65-5834-9f38-99b2ba66f7b0", 00:14:57.690 "is_configured": true, 00:14:57.690 "data_offset": 0, 00:14:57.690 "data_size": 65536 00:14:57.690 }, 00:14:57.690 { 00:14:57.690 "name": "BaseBdev2", 00:14:57.690 "uuid": "b162ac3a-d09a-569a-b88c-9714cf6bfe6e", 00:14:57.690 "is_configured": true, 00:14:57.690 "data_offset": 0, 00:14:57.690 "data_size": 65536 00:14:57.690 }, 00:14:57.690 { 00:14:57.690 "name": "BaseBdev3", 00:14:57.690 "uuid": "81384d12-b3b7-52b7-8c7f-bc0167d90c17", 00:14:57.690 "is_configured": true, 00:14:57.690 "data_offset": 0, 00:14:57.690 "data_size": 65536 00:14:57.690 }, 00:14:57.690 { 00:14:57.690 "name": "BaseBdev4", 00:14:57.690 "uuid": "faa2a233-ba64-57d6-b921-fd39df8d8803", 00:14:57.690 "is_configured": true, 00:14:57.690 "data_offset": 0, 00:14:57.690 "data_size": 65536 00:14:57.690 } 00:14:57.690 ] 00:14:57.690 }' 00:14:57.690 11:24:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.690 11:24:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.259 [2024-11-20 11:24:41.088838] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:58.259 11:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:58.518 [2024-11-20 11:24:41.375990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:58.519 /dev/nbd0 00:14:58.519 11:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:58.519 11:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:58.519 11:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:58.519 11:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:58.519 11:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:58.519 11:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:58.519 11:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:58.519 11:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:58.519 11:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:58.519 11:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:58.519 11:24:41 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:58.519 1+0 records in 00:14:58.519 1+0 records out 00:14:58.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267149 s, 15.3 MB/s 00:14:58.519 11:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.519 11:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:58.519 11:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.519 11:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:58.519 11:24:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:58.519 11:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:58.519 11:24:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:58.519 11:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:58.519 11:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:58.519 11:24:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:05.087 65536+0 records in 00:15:05.087 65536+0 records out 00:15:05.087 33554432 bytes (34 MB, 32 MiB) copied, 5.88612 s, 5.7 MB/s 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:05.087 
11:24:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:05.087 [2024-11-20 11:24:47.570711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.087 [2024-11-20 11:24:47.589547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.087 11:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.088 11:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.088 11:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.088 11:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.088 11:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.088 11:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.088 11:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.088 11:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.088 11:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.088 11:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.088 "name": "raid_bdev1", 00:15:05.088 "uuid": "a6034a1b-5c17-44ad-9e97-7117cd0c9a17", 00:15:05.088 "strip_size_kb": 0, 00:15:05.088 "state": "online", 00:15:05.088 "raid_level": "raid1", 00:15:05.088 "superblock": false, 00:15:05.088 "num_base_bdevs": 4, 00:15:05.088 "num_base_bdevs_discovered": 3, 00:15:05.088 "num_base_bdevs_operational": 3, 00:15:05.088 "base_bdevs_list": [ 00:15:05.088 { 00:15:05.088 "name": null, 00:15:05.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.088 
"is_configured": false, 00:15:05.088 "data_offset": 0, 00:15:05.088 "data_size": 65536 00:15:05.088 }, 00:15:05.088 { 00:15:05.088 "name": "BaseBdev2", 00:15:05.088 "uuid": "b162ac3a-d09a-569a-b88c-9714cf6bfe6e", 00:15:05.088 "is_configured": true, 00:15:05.088 "data_offset": 0, 00:15:05.088 "data_size": 65536 00:15:05.088 }, 00:15:05.088 { 00:15:05.088 "name": "BaseBdev3", 00:15:05.088 "uuid": "81384d12-b3b7-52b7-8c7f-bc0167d90c17", 00:15:05.088 "is_configured": true, 00:15:05.088 "data_offset": 0, 00:15:05.088 "data_size": 65536 00:15:05.088 }, 00:15:05.088 { 00:15:05.088 "name": "BaseBdev4", 00:15:05.088 "uuid": "faa2a233-ba64-57d6-b921-fd39df8d8803", 00:15:05.088 "is_configured": true, 00:15:05.088 "data_offset": 0, 00:15:05.088 "data_size": 65536 00:15:05.088 } 00:15:05.088 ] 00:15:05.088 }' 00:15:05.088 11:24:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.088 11:24:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.088 11:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:05.088 11:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.088 11:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.088 [2024-11-20 11:24:48.052741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:05.088 [2024-11-20 11:24:48.069361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:15:05.088 11:24:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.088 11:24:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:05.088 [2024-11-20 11:24:48.071325] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:06.029 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.029 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.029 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.029 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.029 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.029 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.029 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.029 11:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.029 11:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.029 11:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.029 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.029 "name": "raid_bdev1", 00:15:06.029 "uuid": "a6034a1b-5c17-44ad-9e97-7117cd0c9a17", 00:15:06.029 "strip_size_kb": 0, 00:15:06.029 "state": "online", 00:15:06.029 "raid_level": "raid1", 00:15:06.029 "superblock": false, 00:15:06.029 "num_base_bdevs": 4, 00:15:06.029 "num_base_bdevs_discovered": 4, 00:15:06.029 "num_base_bdevs_operational": 4, 00:15:06.029 "process": { 00:15:06.029 "type": "rebuild", 00:15:06.029 "target": "spare", 00:15:06.029 "progress": { 00:15:06.029 "blocks": 20480, 00:15:06.029 "percent": 31 00:15:06.029 } 00:15:06.029 }, 00:15:06.029 "base_bdevs_list": [ 00:15:06.029 { 00:15:06.029 "name": "spare", 00:15:06.029 "uuid": "d419e749-df23-5cc8-b6b2-44b0cdc4707d", 00:15:06.029 "is_configured": true, 00:15:06.029 "data_offset": 0, 00:15:06.029 "data_size": 65536 00:15:06.029 }, 00:15:06.029 { 00:15:06.029 "name": "BaseBdev2", 00:15:06.029 "uuid": 
"b162ac3a-d09a-569a-b88c-9714cf6bfe6e", 00:15:06.029 "is_configured": true, 00:15:06.029 "data_offset": 0, 00:15:06.029 "data_size": 65536 00:15:06.029 }, 00:15:06.029 { 00:15:06.029 "name": "BaseBdev3", 00:15:06.029 "uuid": "81384d12-b3b7-52b7-8c7f-bc0167d90c17", 00:15:06.029 "is_configured": true, 00:15:06.029 "data_offset": 0, 00:15:06.029 "data_size": 65536 00:15:06.029 }, 00:15:06.029 { 00:15:06.029 "name": "BaseBdev4", 00:15:06.029 "uuid": "faa2a233-ba64-57d6-b921-fd39df8d8803", 00:15:06.029 "is_configured": true, 00:15:06.029 "data_offset": 0, 00:15:06.029 "data_size": 65536 00:15:06.029 } 00:15:06.029 ] 00:15:06.029 }' 00:15:06.029 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.291 [2024-11-20 11:24:49.239672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:06.291 [2024-11-20 11:24:49.277549] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:06.291 [2024-11-20 11:24:49.277743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.291 [2024-11-20 11:24:49.277767] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:06.291 [2024-11-20 11:24:49.277778] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.291 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.291 "name": "raid_bdev1", 00:15:06.291 "uuid": "a6034a1b-5c17-44ad-9e97-7117cd0c9a17", 00:15:06.291 "strip_size_kb": 0, 00:15:06.291 "state": "online", 
00:15:06.291 "raid_level": "raid1", 00:15:06.291 "superblock": false, 00:15:06.291 "num_base_bdevs": 4, 00:15:06.291 "num_base_bdevs_discovered": 3, 00:15:06.291 "num_base_bdevs_operational": 3, 00:15:06.291 "base_bdevs_list": [ 00:15:06.291 { 00:15:06.291 "name": null, 00:15:06.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.291 "is_configured": false, 00:15:06.291 "data_offset": 0, 00:15:06.291 "data_size": 65536 00:15:06.291 }, 00:15:06.292 { 00:15:06.292 "name": "BaseBdev2", 00:15:06.292 "uuid": "b162ac3a-d09a-569a-b88c-9714cf6bfe6e", 00:15:06.292 "is_configured": true, 00:15:06.292 "data_offset": 0, 00:15:06.292 "data_size": 65536 00:15:06.292 }, 00:15:06.292 { 00:15:06.292 "name": "BaseBdev3", 00:15:06.292 "uuid": "81384d12-b3b7-52b7-8c7f-bc0167d90c17", 00:15:06.292 "is_configured": true, 00:15:06.292 "data_offset": 0, 00:15:06.292 "data_size": 65536 00:15:06.292 }, 00:15:06.292 { 00:15:06.292 "name": "BaseBdev4", 00:15:06.292 "uuid": "faa2a233-ba64-57d6-b921-fd39df8d8803", 00:15:06.292 "is_configured": true, 00:15:06.292 "data_offset": 0, 00:15:06.292 "data_size": 65536 00:15:06.292 } 00:15:06.292 ] 00:15:06.292 }' 00:15:06.292 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.292 11:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.863 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:06.863 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.863 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:06.863 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:06.863 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.863 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:06.863 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.863 11:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.863 11:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.863 11:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.863 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.863 "name": "raid_bdev1", 00:15:06.863 "uuid": "a6034a1b-5c17-44ad-9e97-7117cd0c9a17", 00:15:06.863 "strip_size_kb": 0, 00:15:06.863 "state": "online", 00:15:06.863 "raid_level": "raid1", 00:15:06.863 "superblock": false, 00:15:06.863 "num_base_bdevs": 4, 00:15:06.863 "num_base_bdevs_discovered": 3, 00:15:06.863 "num_base_bdevs_operational": 3, 00:15:06.863 "base_bdevs_list": [ 00:15:06.863 { 00:15:06.863 "name": null, 00:15:06.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.863 "is_configured": false, 00:15:06.863 "data_offset": 0, 00:15:06.863 "data_size": 65536 00:15:06.863 }, 00:15:06.863 { 00:15:06.863 "name": "BaseBdev2", 00:15:06.863 "uuid": "b162ac3a-d09a-569a-b88c-9714cf6bfe6e", 00:15:06.863 "is_configured": true, 00:15:06.863 "data_offset": 0, 00:15:06.863 "data_size": 65536 00:15:06.863 }, 00:15:06.863 { 00:15:06.863 "name": "BaseBdev3", 00:15:06.863 "uuid": "81384d12-b3b7-52b7-8c7f-bc0167d90c17", 00:15:06.863 "is_configured": true, 00:15:06.863 "data_offset": 0, 00:15:06.863 "data_size": 65536 00:15:06.863 }, 00:15:06.863 { 00:15:06.863 "name": "BaseBdev4", 00:15:06.863 "uuid": "faa2a233-ba64-57d6-b921-fd39df8d8803", 00:15:06.863 "is_configured": true, 00:15:06.863 "data_offset": 0, 00:15:06.863 "data_size": 65536 00:15:06.863 } 00:15:06.863 ] 00:15:06.863 }' 00:15:06.863 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.863 11:24:49 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:06.863 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.863 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:06.863 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:06.863 11:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.863 11:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.863 [2024-11-20 11:24:49.918002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:06.863 [2024-11-20 11:24:49.934628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:15:06.863 11:24:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.863 11:24:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:06.863 [2024-11-20 11:24:49.936760] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:08.240 11:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.240 11:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.240 11:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.240 11:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.240 11:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.240 11:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.240 11:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.240 11:24:50 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.240 11:24:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.240 11:24:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.240 11:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.240 "name": "raid_bdev1", 00:15:08.240 "uuid": "a6034a1b-5c17-44ad-9e97-7117cd0c9a17", 00:15:08.240 "strip_size_kb": 0, 00:15:08.240 "state": "online", 00:15:08.240 "raid_level": "raid1", 00:15:08.240 "superblock": false, 00:15:08.240 "num_base_bdevs": 4, 00:15:08.240 "num_base_bdevs_discovered": 4, 00:15:08.240 "num_base_bdevs_operational": 4, 00:15:08.240 "process": { 00:15:08.240 "type": "rebuild", 00:15:08.240 "target": "spare", 00:15:08.240 "progress": { 00:15:08.240 "blocks": 20480, 00:15:08.240 "percent": 31 00:15:08.240 } 00:15:08.240 }, 00:15:08.240 "base_bdevs_list": [ 00:15:08.240 { 00:15:08.240 "name": "spare", 00:15:08.240 "uuid": "d419e749-df23-5cc8-b6b2-44b0cdc4707d", 00:15:08.240 "is_configured": true, 00:15:08.240 "data_offset": 0, 00:15:08.240 "data_size": 65536 00:15:08.240 }, 00:15:08.240 { 00:15:08.240 "name": "BaseBdev2", 00:15:08.240 "uuid": "b162ac3a-d09a-569a-b88c-9714cf6bfe6e", 00:15:08.240 "is_configured": true, 00:15:08.240 "data_offset": 0, 00:15:08.240 "data_size": 65536 00:15:08.240 }, 00:15:08.240 { 00:15:08.240 "name": "BaseBdev3", 00:15:08.240 "uuid": "81384d12-b3b7-52b7-8c7f-bc0167d90c17", 00:15:08.240 "is_configured": true, 00:15:08.240 "data_offset": 0, 00:15:08.240 "data_size": 65536 00:15:08.240 }, 00:15:08.240 { 00:15:08.240 "name": "BaseBdev4", 00:15:08.240 "uuid": "faa2a233-ba64-57d6-b921-fd39df8d8803", 00:15:08.240 "is_configured": true, 00:15:08.240 "data_offset": 0, 00:15:08.240 "data_size": 65536 00:15:08.240 } 00:15:08.240 ] 00:15:08.240 }' 00:15:08.240 11:24:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:15:08.240 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.241 [2024-11-20 11:24:51.088333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:08.241 [2024-11-20 11:24:51.142738] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.241 
11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.241 "name": "raid_bdev1", 00:15:08.241 "uuid": "a6034a1b-5c17-44ad-9e97-7117cd0c9a17", 00:15:08.241 "strip_size_kb": 0, 00:15:08.241 "state": "online", 00:15:08.241 "raid_level": "raid1", 00:15:08.241 "superblock": false, 00:15:08.241 "num_base_bdevs": 4, 00:15:08.241 "num_base_bdevs_discovered": 3, 00:15:08.241 "num_base_bdevs_operational": 3, 00:15:08.241 "process": { 00:15:08.241 "type": "rebuild", 00:15:08.241 "target": "spare", 00:15:08.241 "progress": { 00:15:08.241 "blocks": 24576, 00:15:08.241 "percent": 37 00:15:08.241 } 00:15:08.241 }, 00:15:08.241 "base_bdevs_list": [ 00:15:08.241 { 00:15:08.241 "name": "spare", 00:15:08.241 "uuid": "d419e749-df23-5cc8-b6b2-44b0cdc4707d", 00:15:08.241 "is_configured": true, 00:15:08.241 "data_offset": 0, 00:15:08.241 "data_size": 65536 00:15:08.241 }, 00:15:08.241 { 00:15:08.241 "name": null, 00:15:08.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.241 "is_configured": false, 00:15:08.241 "data_offset": 0, 00:15:08.241 "data_size": 65536 00:15:08.241 }, 00:15:08.241 { 00:15:08.241 "name": "BaseBdev3", 00:15:08.241 "uuid": "81384d12-b3b7-52b7-8c7f-bc0167d90c17", 00:15:08.241 "is_configured": true, 
00:15:08.241 "data_offset": 0, 00:15:08.241 "data_size": 65536 00:15:08.241 }, 00:15:08.241 { 00:15:08.241 "name": "BaseBdev4", 00:15:08.241 "uuid": "faa2a233-ba64-57d6-b921-fd39df8d8803", 00:15:08.241 "is_configured": true, 00:15:08.241 "data_offset": 0, 00:15:08.241 "data_size": 65536 00:15:08.241 } 00:15:08.241 ] 00:15:08.241 }' 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=457 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.241 11:24:51 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.241 "name": "raid_bdev1", 00:15:08.241 "uuid": "a6034a1b-5c17-44ad-9e97-7117cd0c9a17", 00:15:08.241 "strip_size_kb": 0, 00:15:08.241 "state": "online", 00:15:08.241 "raid_level": "raid1", 00:15:08.241 "superblock": false, 00:15:08.241 "num_base_bdevs": 4, 00:15:08.241 "num_base_bdevs_discovered": 3, 00:15:08.241 "num_base_bdevs_operational": 3, 00:15:08.241 "process": { 00:15:08.241 "type": "rebuild", 00:15:08.241 "target": "spare", 00:15:08.241 "progress": { 00:15:08.241 "blocks": 26624, 00:15:08.241 "percent": 40 00:15:08.241 } 00:15:08.241 }, 00:15:08.241 "base_bdevs_list": [ 00:15:08.241 { 00:15:08.241 "name": "spare", 00:15:08.241 "uuid": "d419e749-df23-5cc8-b6b2-44b0cdc4707d", 00:15:08.241 "is_configured": true, 00:15:08.241 "data_offset": 0, 00:15:08.241 "data_size": 65536 00:15:08.241 }, 00:15:08.241 { 00:15:08.241 "name": null, 00:15:08.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.241 "is_configured": false, 00:15:08.241 "data_offset": 0, 00:15:08.241 "data_size": 65536 00:15:08.241 }, 00:15:08.241 { 00:15:08.241 "name": "BaseBdev3", 00:15:08.241 "uuid": "81384d12-b3b7-52b7-8c7f-bc0167d90c17", 00:15:08.241 "is_configured": true, 00:15:08.241 "data_offset": 0, 00:15:08.241 "data_size": 65536 00:15:08.241 }, 00:15:08.241 { 00:15:08.241 "name": "BaseBdev4", 00:15:08.241 "uuid": "faa2a233-ba64-57d6-b921-fd39df8d8803", 00:15:08.241 "is_configured": true, 00:15:08.241 "data_offset": 0, 00:15:08.241 "data_size": 65536 00:15:08.241 } 00:15:08.241 ] 00:15:08.241 }' 00:15:08.241 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.501 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.501 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:15:08.501 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.501 11:24:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:09.438 11:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:09.438 11:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.438 11:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.438 11:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.438 11:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.438 11:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.438 11:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.438 11:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.438 11:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.438 11:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.438 11:24:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.438 11:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.438 "name": "raid_bdev1", 00:15:09.438 "uuid": "a6034a1b-5c17-44ad-9e97-7117cd0c9a17", 00:15:09.438 "strip_size_kb": 0, 00:15:09.438 "state": "online", 00:15:09.438 "raid_level": "raid1", 00:15:09.438 "superblock": false, 00:15:09.438 "num_base_bdevs": 4, 00:15:09.438 "num_base_bdevs_discovered": 3, 00:15:09.438 "num_base_bdevs_operational": 3, 00:15:09.438 "process": { 00:15:09.438 "type": "rebuild", 00:15:09.438 "target": "spare", 00:15:09.438 "progress": { 00:15:09.438 
"blocks": 51200, 00:15:09.438 "percent": 78 00:15:09.438 } 00:15:09.438 }, 00:15:09.438 "base_bdevs_list": [ 00:15:09.438 { 00:15:09.438 "name": "spare", 00:15:09.438 "uuid": "d419e749-df23-5cc8-b6b2-44b0cdc4707d", 00:15:09.438 "is_configured": true, 00:15:09.438 "data_offset": 0, 00:15:09.438 "data_size": 65536 00:15:09.438 }, 00:15:09.438 { 00:15:09.438 "name": null, 00:15:09.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.438 "is_configured": false, 00:15:09.438 "data_offset": 0, 00:15:09.438 "data_size": 65536 00:15:09.438 }, 00:15:09.438 { 00:15:09.438 "name": "BaseBdev3", 00:15:09.438 "uuid": "81384d12-b3b7-52b7-8c7f-bc0167d90c17", 00:15:09.438 "is_configured": true, 00:15:09.438 "data_offset": 0, 00:15:09.438 "data_size": 65536 00:15:09.438 }, 00:15:09.438 { 00:15:09.438 "name": "BaseBdev4", 00:15:09.438 "uuid": "faa2a233-ba64-57d6-b921-fd39df8d8803", 00:15:09.438 "is_configured": true, 00:15:09.438 "data_offset": 0, 00:15:09.438 "data_size": 65536 00:15:09.439 } 00:15:09.439 ] 00:15:09.439 }' 00:15:09.439 11:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.439 11:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.439 11:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.698 11:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.698 11:24:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:10.267 [2024-11-20 11:24:53.152731] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:10.267 [2024-11-20 11:24:53.152933] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:10.267 [2024-11-20 11:24:53.152991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.527 11:24:53 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:10.527 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.527 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.527 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.527 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.527 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.527 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.527 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.527 11:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.527 11:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.527 11:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.787 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.787 "name": "raid_bdev1", 00:15:10.787 "uuid": "a6034a1b-5c17-44ad-9e97-7117cd0c9a17", 00:15:10.787 "strip_size_kb": 0, 00:15:10.787 "state": "online", 00:15:10.787 "raid_level": "raid1", 00:15:10.787 "superblock": false, 00:15:10.787 "num_base_bdevs": 4, 00:15:10.787 "num_base_bdevs_discovered": 3, 00:15:10.787 "num_base_bdevs_operational": 3, 00:15:10.787 "base_bdevs_list": [ 00:15:10.787 { 00:15:10.787 "name": "spare", 00:15:10.787 "uuid": "d419e749-df23-5cc8-b6b2-44b0cdc4707d", 00:15:10.787 "is_configured": true, 00:15:10.787 "data_offset": 0, 00:15:10.787 "data_size": 65536 00:15:10.787 }, 00:15:10.787 { 00:15:10.787 "name": null, 00:15:10.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.787 "is_configured": false, 00:15:10.787 
"data_offset": 0, 00:15:10.787 "data_size": 65536 00:15:10.787 }, 00:15:10.787 { 00:15:10.787 "name": "BaseBdev3", 00:15:10.788 "uuid": "81384d12-b3b7-52b7-8c7f-bc0167d90c17", 00:15:10.788 "is_configured": true, 00:15:10.788 "data_offset": 0, 00:15:10.788 "data_size": 65536 00:15:10.788 }, 00:15:10.788 { 00:15:10.788 "name": "BaseBdev4", 00:15:10.788 "uuid": "faa2a233-ba64-57d6-b921-fd39df8d8803", 00:15:10.788 "is_configured": true, 00:15:10.788 "data_offset": 0, 00:15:10.788 "data_size": 65536 00:15:10.788 } 00:15:10.788 ] 00:15:10.788 }' 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.788 11:24:53 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.788 "name": "raid_bdev1", 00:15:10.788 "uuid": "a6034a1b-5c17-44ad-9e97-7117cd0c9a17", 00:15:10.788 "strip_size_kb": 0, 00:15:10.788 "state": "online", 00:15:10.788 "raid_level": "raid1", 00:15:10.788 "superblock": false, 00:15:10.788 "num_base_bdevs": 4, 00:15:10.788 "num_base_bdevs_discovered": 3, 00:15:10.788 "num_base_bdevs_operational": 3, 00:15:10.788 "base_bdevs_list": [ 00:15:10.788 { 00:15:10.788 "name": "spare", 00:15:10.788 "uuid": "d419e749-df23-5cc8-b6b2-44b0cdc4707d", 00:15:10.788 "is_configured": true, 00:15:10.788 "data_offset": 0, 00:15:10.788 "data_size": 65536 00:15:10.788 }, 00:15:10.788 { 00:15:10.788 "name": null, 00:15:10.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.788 "is_configured": false, 00:15:10.788 "data_offset": 0, 00:15:10.788 "data_size": 65536 00:15:10.788 }, 00:15:10.788 { 00:15:10.788 "name": "BaseBdev3", 00:15:10.788 "uuid": "81384d12-b3b7-52b7-8c7f-bc0167d90c17", 00:15:10.788 "is_configured": true, 00:15:10.788 "data_offset": 0, 00:15:10.788 "data_size": 65536 00:15:10.788 }, 00:15:10.788 { 00:15:10.788 "name": "BaseBdev4", 00:15:10.788 "uuid": "faa2a233-ba64-57d6-b921-fd39df8d8803", 00:15:10.788 "is_configured": true, 00:15:10.788 "data_offset": 0, 00:15:10.788 "data_size": 65536 00:15:10.788 } 00:15:10.788 ] 00:15:10.788 }' 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.788 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.047 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.047 11:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.047 11:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.047 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.048 11:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.048 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.048 "name": "raid_bdev1", 00:15:11.048 "uuid": "a6034a1b-5c17-44ad-9e97-7117cd0c9a17", 00:15:11.048 "strip_size_kb": 0, 00:15:11.048 "state": "online", 00:15:11.048 "raid_level": "raid1", 00:15:11.048 "superblock": false, 00:15:11.048 "num_base_bdevs": 4, 00:15:11.048 
"num_base_bdevs_discovered": 3, 00:15:11.048 "num_base_bdevs_operational": 3, 00:15:11.048 "base_bdevs_list": [ 00:15:11.048 { 00:15:11.048 "name": "spare", 00:15:11.048 "uuid": "d419e749-df23-5cc8-b6b2-44b0cdc4707d", 00:15:11.048 "is_configured": true, 00:15:11.048 "data_offset": 0, 00:15:11.048 "data_size": 65536 00:15:11.048 }, 00:15:11.048 { 00:15:11.048 "name": null, 00:15:11.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.048 "is_configured": false, 00:15:11.048 "data_offset": 0, 00:15:11.048 "data_size": 65536 00:15:11.048 }, 00:15:11.048 { 00:15:11.048 "name": "BaseBdev3", 00:15:11.048 "uuid": "81384d12-b3b7-52b7-8c7f-bc0167d90c17", 00:15:11.048 "is_configured": true, 00:15:11.048 "data_offset": 0, 00:15:11.048 "data_size": 65536 00:15:11.048 }, 00:15:11.048 { 00:15:11.048 "name": "BaseBdev4", 00:15:11.048 "uuid": "faa2a233-ba64-57d6-b921-fd39df8d8803", 00:15:11.048 "is_configured": true, 00:15:11.048 "data_offset": 0, 00:15:11.048 "data_size": 65536 00:15:11.048 } 00:15:11.048 ] 00:15:11.048 }' 00:15:11.048 11:24:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.048 11:24:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.307 11:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:11.307 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.307 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.307 [2024-11-20 11:24:54.391591] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:11.307 [2024-11-20 11:24:54.391674] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:11.307 [2024-11-20 11:24:54.391783] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.307 [2024-11-20 11:24:54.391905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:15:11.307 [2024-11-20 11:24:54.391955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:11.307 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.307 11:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.307 11:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:11.307 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.307 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.307 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.567 11:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:11.567 11:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:11.567 11:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:11.567 11:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:11.567 11:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:11.567 11:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:11.567 11:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:11.567 11:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:11.567 11:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:11.567 11:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:11.567 11:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:11.567 11:24:54 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:11.567 11:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:11.567 /dev/nbd0 00:15:11.828 11:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:11.828 11:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:11.828 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:11.828 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:11.828 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:11.828 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:11.828 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:11.828 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:11.828 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:11.828 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:11.828 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:11.828 1+0 records in 00:15:11.828 1+0 records out 00:15:11.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036449 s, 11.2 MB/s 00:15:11.828 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.828 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:11.828 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:11.828 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:11.828 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:11.828 11:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:11.828 11:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:11.828 11:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:11.828 /dev/nbd1 00:15:11.828 11:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:12.088 11:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:12.088 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:12.088 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:12.088 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:12.088 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:12.088 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:12.088 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:12.088 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:12.088 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:12.088 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.088 1+0 records in 00:15:12.088 1+0 records out 00:15:12.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504264 s, 8.1 MB/s 00:15:12.088 11:24:54 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.088 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:12.088 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.088 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:12.088 11:24:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:12.088 11:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.088 11:24:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:12.088 11:24:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:12.088 11:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:12.088 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.088 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:12.088 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:12.088 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:12.088 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.088 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:12.347 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:12.348 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:12.348 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:12.348 11:24:55 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.348 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.348 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:12.348 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:12.348 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.348 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.348 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:12.607 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:12.607 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:12.607 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:12.607 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.607 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.607 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:12.607 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:12.607 11:24:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.607 11:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:12.607 11:24:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77705 00:15:12.607 11:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77705 ']' 00:15:12.607 11:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77705 00:15:12.607 11:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # 
uname 00:15:12.607 11:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.607 11:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77705 00:15:12.607 11:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:12.607 11:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:12.607 11:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77705' 00:15:12.607 killing process with pid 77705 00:15:12.607 Received shutdown signal, test time was about 60.000000 seconds 00:15:12.607 00:15:12.607 Latency(us) 00:15:12.607 [2024-11-20T11:24:55.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.607 [2024-11-20T11:24:55.723Z] =================================================================================================================== 00:15:12.607 [2024-11-20T11:24:55.723Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:12.607 11:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77705 00:15:12.607 [2024-11-20 11:24:55.651142] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:12.607 11:24:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77705 00:15:13.175 [2024-11-20 11:24:56.142098] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:14.556 00:15:14.556 real 0m17.930s 00:15:14.556 user 0m20.298s 00:15:14.556 sys 0m3.307s 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.556 ************************************ 00:15:14.556 END TEST raid_rebuild_test 
00:15:14.556 ************************************ 00:15:14.556 11:24:57 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:15:14.556 11:24:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:14.556 11:24:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.556 11:24:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:14.556 ************************************ 00:15:14.556 START TEST raid_rebuild_test_sb 00:15:14.556 ************************************ 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78151 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78151 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78151 ']' 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.556 11:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.556 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:14.556 Zero copy mechanism will not be used. 00:15:14.556 [2024-11-20 11:24:57.457893] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:15:14.556 [2024-11-20 11:24:57.458090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78151 ] 00:15:14.556 [2024-11-20 11:24:57.630572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.817 [2024-11-20 11:24:57.747619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.076 [2024-11-20 11:24:57.953065] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:15.076 [2024-11-20 11:24:57.953231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:15.336 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:15.336 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:15.336 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:15.336 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:15.336 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.336 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.336 BaseBdev1_malloc 00:15:15.336 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.336 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:15.336 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.336 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.336 [2024-11-20 11:24:58.354586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:15:15.336 [2024-11-20 11:24:58.354721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.336 [2024-11-20 11:24:58.354767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:15.336 [2024-11-20 11:24:58.354805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.336 [2024-11-20 11:24:58.357186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.336 [2024-11-20 11:24:58.357278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:15.336 BaseBdev1 00:15:15.336 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.336 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:15.336 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:15.336 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.336 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.336 BaseBdev2_malloc 00:15:15.337 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.337 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:15.337 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.337 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.337 [2024-11-20 11:24:58.415463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:15.337 [2024-11-20 11:24:58.415579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.337 [2024-11-20 11:24:58.415607] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:15.337 [2024-11-20 11:24:58.415622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.337 [2024-11-20 11:24:58.418078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.337 [2024-11-20 11:24:58.418121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:15.337 BaseBdev2 00:15:15.337 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.337 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:15.337 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:15.337 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.337 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.596 BaseBdev3_malloc 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.596 [2024-11-20 11:24:58.483532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:15.596 [2024-11-20 11:24:58.483599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.596 [2024-11-20 11:24:58.483640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:15.596 [2024-11-20 11:24:58.483651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:15.596 [2024-11-20 11:24:58.485824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.596 [2024-11-20 11:24:58.485868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:15.596 BaseBdev3 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.596 BaseBdev4_malloc 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.596 [2024-11-20 11:24:58.539839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:15.596 [2024-11-20 11:24:58.539904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.596 [2024-11-20 11:24:58.539927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:15.596 [2024-11-20 11:24:58.539939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.596 [2024-11-20 11:24:58.542275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.596 [2024-11-20 11:24:58.542324] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:15.596 BaseBdev4 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.596 spare_malloc 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.596 spare_delay 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.596 [2024-11-20 11:24:58.609805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:15.596 [2024-11-20 11:24:58.609925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.596 [2024-11-20 11:24:58.609956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:15.596 [2024-11-20 11:24:58.609967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:15.596 [2024-11-20 11:24:58.612466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.596 [2024-11-20 11:24:58.612527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:15.596 spare 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.596 [2024-11-20 11:24:58.621852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.596 [2024-11-20 11:24:58.624039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:15.596 [2024-11-20 11:24:58.624215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:15.596 [2024-11-20 11:24:58.624297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:15.596 [2024-11-20 11:24:58.624561] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:15.596 [2024-11-20 11:24:58.624586] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:15.596 [2024-11-20 11:24:58.624925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:15.596 [2024-11-20 11:24:58.625144] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:15.596 [2024-11-20 11:24:58.625157] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:15.596 [2024-11-20 11:24:58.625349] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.596 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.597 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.597 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:15.597 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.597 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.597 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.597 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.597 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.597 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.597 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.597 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.597 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.597 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.597 "name": "raid_bdev1", 00:15:15.597 "uuid": 
"5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:15.597 "strip_size_kb": 0, 00:15:15.597 "state": "online", 00:15:15.597 "raid_level": "raid1", 00:15:15.597 "superblock": true, 00:15:15.597 "num_base_bdevs": 4, 00:15:15.597 "num_base_bdevs_discovered": 4, 00:15:15.597 "num_base_bdevs_operational": 4, 00:15:15.597 "base_bdevs_list": [ 00:15:15.597 { 00:15:15.597 "name": "BaseBdev1", 00:15:15.597 "uuid": "209223b3-7752-5a03-951d-e581f38cc5a3", 00:15:15.597 "is_configured": true, 00:15:15.597 "data_offset": 2048, 00:15:15.597 "data_size": 63488 00:15:15.597 }, 00:15:15.597 { 00:15:15.597 "name": "BaseBdev2", 00:15:15.597 "uuid": "ec53f0c3-af71-530d-80fe-8fcda5841652", 00:15:15.597 "is_configured": true, 00:15:15.597 "data_offset": 2048, 00:15:15.597 "data_size": 63488 00:15:15.597 }, 00:15:15.597 { 00:15:15.597 "name": "BaseBdev3", 00:15:15.597 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:15.597 "is_configured": true, 00:15:15.597 "data_offset": 2048, 00:15:15.597 "data_size": 63488 00:15:15.597 }, 00:15:15.597 { 00:15:15.597 "name": "BaseBdev4", 00:15:15.597 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:15.597 "is_configured": true, 00:15:15.597 "data_offset": 2048, 00:15:15.597 "data_size": 63488 00:15:15.597 } 00:15:15.597 ] 00:15:15.597 }' 00:15:15.597 11:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.597 11:24:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.166 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:16.166 11:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.166 11:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.166 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:16.166 [2024-11-20 11:24:59.105456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:16.166 11:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.166 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:16.166 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:16.167 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.167 11:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.167 11:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.167 11:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.167 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:16.167 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:16.167 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:16.167 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:16.167 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:16.167 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:16.167 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:16.167 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:16.167 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:16.167 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:16.167 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:16.167 11:24:59 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:16.167 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:16.167 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:16.432 [2024-11-20 11:24:59.384638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:16.432 /dev/nbd0 00:15:16.432 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:16.432 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:16.432 11:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:16.433 11:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:16.433 11:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:16.433 11:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:16.433 11:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:16.433 11:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:16.433 11:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:16.433 11:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:16.433 11:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:16.433 1+0 records in 00:15:16.433 1+0 records out 00:15:16.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512703 s, 8.0 MB/s 00:15:16.433 11:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.433 11:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:16.433 11:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.433 11:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:16.433 11:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:16.433 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:16.433 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:16.433 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:16.433 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:16.433 11:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:23.011 63488+0 records in 00:15:23.011 63488+0 records out 00:15:23.011 32505856 bytes (33 MB, 31 MiB) copied, 5.62534 s, 5.8 MB/s 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:15:23.011 [2024-11-20 11:25:05.302686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.011 [2024-11-20 11:25:05.334738] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.011 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.011 "name": "raid_bdev1", 00:15:23.011 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:23.011 "strip_size_kb": 0, 00:15:23.011 "state": "online", 00:15:23.011 "raid_level": "raid1", 00:15:23.011 "superblock": true, 00:15:23.011 "num_base_bdevs": 4, 00:15:23.011 "num_base_bdevs_discovered": 3, 00:15:23.011 "num_base_bdevs_operational": 3, 00:15:23.011 "base_bdevs_list": [ 00:15:23.011 { 00:15:23.011 "name": null, 00:15:23.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.011 "is_configured": false, 00:15:23.011 "data_offset": 0, 00:15:23.011 "data_size": 63488 00:15:23.011 }, 00:15:23.011 { 00:15:23.011 "name": "BaseBdev2", 00:15:23.011 "uuid": "ec53f0c3-af71-530d-80fe-8fcda5841652", 00:15:23.011 "is_configured": true, 00:15:23.011 
"data_offset": 2048, 00:15:23.011 "data_size": 63488 00:15:23.011 }, 00:15:23.011 { 00:15:23.011 "name": "BaseBdev3", 00:15:23.011 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:23.011 "is_configured": true, 00:15:23.011 "data_offset": 2048, 00:15:23.011 "data_size": 63488 00:15:23.011 }, 00:15:23.011 { 00:15:23.011 "name": "BaseBdev4", 00:15:23.011 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:23.011 "is_configured": true, 00:15:23.011 "data_offset": 2048, 00:15:23.011 "data_size": 63488 00:15:23.012 } 00:15:23.012 ] 00:15:23.012 }' 00:15:23.012 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.012 11:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.012 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:23.012 11:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.012 11:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.012 [2024-11-20 11:25:05.817952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:23.012 [2024-11-20 11:25:05.836238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:15:23.012 11:25:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.012 [2024-11-20 11:25:05.838333] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:23.012 11:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:23.946 11:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.946 11:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.946 11:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:15:23.946 11:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.946 11:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.946 11:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.946 11:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.946 11:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.946 11:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.946 11:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.946 11:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.946 "name": "raid_bdev1", 00:15:23.946 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:23.946 "strip_size_kb": 0, 00:15:23.946 "state": "online", 00:15:23.946 "raid_level": "raid1", 00:15:23.946 "superblock": true, 00:15:23.946 "num_base_bdevs": 4, 00:15:23.946 "num_base_bdevs_discovered": 4, 00:15:23.946 "num_base_bdevs_operational": 4, 00:15:23.946 "process": { 00:15:23.946 "type": "rebuild", 00:15:23.946 "target": "spare", 00:15:23.946 "progress": { 00:15:23.946 "blocks": 20480, 00:15:23.946 "percent": 32 00:15:23.946 } 00:15:23.946 }, 00:15:23.946 "base_bdevs_list": [ 00:15:23.946 { 00:15:23.946 "name": "spare", 00:15:23.946 "uuid": "1fddefd6-763b-53b2-ad12-ecadb22a4dfc", 00:15:23.946 "is_configured": true, 00:15:23.946 "data_offset": 2048, 00:15:23.946 "data_size": 63488 00:15:23.946 }, 00:15:23.946 { 00:15:23.946 "name": "BaseBdev2", 00:15:23.946 "uuid": "ec53f0c3-af71-530d-80fe-8fcda5841652", 00:15:23.946 "is_configured": true, 00:15:23.946 "data_offset": 2048, 00:15:23.946 "data_size": 63488 00:15:23.946 }, 00:15:23.946 { 00:15:23.946 "name": "BaseBdev3", 00:15:23.946 "uuid": 
"247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:23.946 "is_configured": true, 00:15:23.946 "data_offset": 2048, 00:15:23.946 "data_size": 63488 00:15:23.946 }, 00:15:23.946 { 00:15:23.946 "name": "BaseBdev4", 00:15:23.946 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:23.946 "is_configured": true, 00:15:23.946 "data_offset": 2048, 00:15:23.946 "data_size": 63488 00:15:23.946 } 00:15:23.946 ] 00:15:23.946 }' 00:15:23.946 11:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.946 11:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.946 11:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.946 11:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.946 11:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:23.946 11:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.946 11:25:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.946 [2024-11-20 11:25:06.981276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.946 [2024-11-20 11:25:07.044248] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:23.946 [2024-11-20 11:25:07.044341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.946 [2024-11-20 11:25:07.044359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.946 [2024-11-20 11:25:07.044369] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:24.206 11:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.206 11:25:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:24.206 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.206 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.206 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.206 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.206 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.206 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.206 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.206 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.206 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.206 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.206 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.206 11:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.206 11:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.206 11:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.206 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.206 "name": "raid_bdev1", 00:15:24.206 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:24.206 "strip_size_kb": 0, 00:15:24.206 "state": "online", 00:15:24.206 "raid_level": "raid1", 00:15:24.206 "superblock": true, 00:15:24.206 "num_base_bdevs": 4, 00:15:24.206 
"num_base_bdevs_discovered": 3, 00:15:24.206 "num_base_bdevs_operational": 3, 00:15:24.206 "base_bdevs_list": [ 00:15:24.206 { 00:15:24.206 "name": null, 00:15:24.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.206 "is_configured": false, 00:15:24.206 "data_offset": 0, 00:15:24.206 "data_size": 63488 00:15:24.206 }, 00:15:24.206 { 00:15:24.206 "name": "BaseBdev2", 00:15:24.206 "uuid": "ec53f0c3-af71-530d-80fe-8fcda5841652", 00:15:24.206 "is_configured": true, 00:15:24.206 "data_offset": 2048, 00:15:24.206 "data_size": 63488 00:15:24.206 }, 00:15:24.206 { 00:15:24.206 "name": "BaseBdev3", 00:15:24.206 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:24.206 "is_configured": true, 00:15:24.206 "data_offset": 2048, 00:15:24.206 "data_size": 63488 00:15:24.206 }, 00:15:24.206 { 00:15:24.206 "name": "BaseBdev4", 00:15:24.206 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:24.206 "is_configured": true, 00:15:24.206 "data_offset": 2048, 00:15:24.206 "data_size": 63488 00:15:24.206 } 00:15:24.206 ] 00:15:24.206 }' 00:15:24.206 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.206 11:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.465 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:24.465 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.465 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:24.465 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:24.465 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.465 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.465 11:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:15:24.465 11:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.465 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.465 11:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.724 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.724 "name": "raid_bdev1", 00:15:24.724 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:24.724 "strip_size_kb": 0, 00:15:24.724 "state": "online", 00:15:24.724 "raid_level": "raid1", 00:15:24.724 "superblock": true, 00:15:24.724 "num_base_bdevs": 4, 00:15:24.724 "num_base_bdevs_discovered": 3, 00:15:24.724 "num_base_bdevs_operational": 3, 00:15:24.724 "base_bdevs_list": [ 00:15:24.724 { 00:15:24.724 "name": null, 00:15:24.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.724 "is_configured": false, 00:15:24.724 "data_offset": 0, 00:15:24.724 "data_size": 63488 00:15:24.724 }, 00:15:24.724 { 00:15:24.724 "name": "BaseBdev2", 00:15:24.724 "uuid": "ec53f0c3-af71-530d-80fe-8fcda5841652", 00:15:24.724 "is_configured": true, 00:15:24.724 "data_offset": 2048, 00:15:24.724 "data_size": 63488 00:15:24.724 }, 00:15:24.724 { 00:15:24.724 "name": "BaseBdev3", 00:15:24.724 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:24.724 "is_configured": true, 00:15:24.724 "data_offset": 2048, 00:15:24.724 "data_size": 63488 00:15:24.724 }, 00:15:24.724 { 00:15:24.724 "name": "BaseBdev4", 00:15:24.724 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:24.724 "is_configured": true, 00:15:24.724 "data_offset": 2048, 00:15:24.724 "data_size": 63488 00:15:24.724 } 00:15:24.724 ] 00:15:24.724 }' 00:15:24.724 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.724 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:15:24.724 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.724 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:24.724 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:24.724 11:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.724 11:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.724 [2024-11-20 11:25:07.685768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.724 [2024-11-20 11:25:07.701602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:15:24.724 11:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.724 11:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:24.724 [2024-11-20 11:25:07.703681] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:25.659 11:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.659 11:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.659 11:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.659 11:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.659 11:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.659 11:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.659 11:25:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.659 11:25:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.659 11:25:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.659 11:25:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.659 11:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.659 "name": "raid_bdev1", 00:15:25.659 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:25.659 "strip_size_kb": 0, 00:15:25.659 "state": "online", 00:15:25.659 "raid_level": "raid1", 00:15:25.659 "superblock": true, 00:15:25.659 "num_base_bdevs": 4, 00:15:25.659 "num_base_bdevs_discovered": 4, 00:15:25.659 "num_base_bdevs_operational": 4, 00:15:25.659 "process": { 00:15:25.659 "type": "rebuild", 00:15:25.659 "target": "spare", 00:15:25.659 "progress": { 00:15:25.659 "blocks": 20480, 00:15:25.659 "percent": 32 00:15:25.659 } 00:15:25.659 }, 00:15:25.659 "base_bdevs_list": [ 00:15:25.659 { 00:15:25.659 "name": "spare", 00:15:25.659 "uuid": "1fddefd6-763b-53b2-ad12-ecadb22a4dfc", 00:15:25.659 "is_configured": true, 00:15:25.659 "data_offset": 2048, 00:15:25.659 "data_size": 63488 00:15:25.659 }, 00:15:25.659 { 00:15:25.659 "name": "BaseBdev2", 00:15:25.659 "uuid": "ec53f0c3-af71-530d-80fe-8fcda5841652", 00:15:25.659 "is_configured": true, 00:15:25.659 "data_offset": 2048, 00:15:25.659 "data_size": 63488 00:15:25.659 }, 00:15:25.659 { 00:15:25.659 "name": "BaseBdev3", 00:15:25.659 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:25.659 "is_configured": true, 00:15:25.659 "data_offset": 2048, 00:15:25.659 "data_size": 63488 00:15:25.659 }, 00:15:25.659 { 00:15:25.659 "name": "BaseBdev4", 00:15:25.659 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:25.659 "is_configured": true, 00:15:25.659 "data_offset": 2048, 00:15:25.659 "data_size": 63488 00:15:25.659 } 00:15:25.659 ] 00:15:25.659 }' 00:15:25.659 11:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:15:25.917 11:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.918 11:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.918 11:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.918 11:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:25.918 11:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:25.918 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:25.918 11:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:25.918 11:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:25.918 11:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:25.918 11:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:25.918 11:25:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.918 11:25:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.918 [2024-11-20 11:25:08.871665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:25.918 [2024-11-20 11:25:09.009501] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:15:25.918 11:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.918 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:25.918 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:25.918 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process 
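The `line 666: [: =: unary operator expected` error captured above is the classic POSIX `[` pitfall: the script's variable expanded to nothing, so `'[' = false ']'` reached the `[` builtin with only two arguments. A minimal sketch of the failure mode and the usual fixes (the `flag` variable is illustrative, not taken from bdev_raid.sh):

```shell
#!/usr/bin/env bash
# With flag empty, the unquoted form '[ $flag = false ]' expands to
# '[ = false ]' -- exactly the "unary operator expected" error in the
# log above, because '[' sees '=' where it expects an operand.
flag=""

# Fix 1: quote the expansion so '[' always receives three arguments.
if [ "$flag" = false ]; then
  echo "flag is false"
else
  echo "flag is empty or not false"
fi

# Fix 2 (bash-only): [[ ]] does not word-split, so quoting is optional.
if [[ $flag == false ]]; then
  echo "flag is false (via [[ ]])"
fi
```

Note the error is non-fatal here only because the script is not running under `set -e` at that point; the `[` builtin simply returns nonzero and the trace continues.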
raid_bdev1 rebuild spare 00:15:25.918 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.918 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.918 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.918 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.918 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.918 11:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.918 11:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.918 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.178 11:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.178 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.178 "name": "raid_bdev1", 00:15:26.178 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:26.178 "strip_size_kb": 0, 00:15:26.178 "state": "online", 00:15:26.178 "raid_level": "raid1", 00:15:26.178 "superblock": true, 00:15:26.178 "num_base_bdevs": 4, 00:15:26.178 "num_base_bdevs_discovered": 3, 00:15:26.178 "num_base_bdevs_operational": 3, 00:15:26.178 "process": { 00:15:26.178 "type": "rebuild", 00:15:26.178 "target": "spare", 00:15:26.178 "progress": { 00:15:26.178 "blocks": 24576, 00:15:26.178 "percent": 38 00:15:26.178 } 00:15:26.178 }, 00:15:26.178 "base_bdevs_list": [ 00:15:26.178 { 00:15:26.179 "name": "spare", 00:15:26.179 "uuid": "1fddefd6-763b-53b2-ad12-ecadb22a4dfc", 00:15:26.179 "is_configured": true, 00:15:26.179 "data_offset": 2048, 00:15:26.179 "data_size": 63488 00:15:26.179 }, 00:15:26.179 { 00:15:26.179 "name": null, 00:15:26.179 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:26.179 "is_configured": false, 00:15:26.179 "data_offset": 0, 00:15:26.179 "data_size": 63488 00:15:26.179 }, 00:15:26.179 { 00:15:26.179 "name": "BaseBdev3", 00:15:26.179 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:26.179 "is_configured": true, 00:15:26.179 "data_offset": 2048, 00:15:26.179 "data_size": 63488 00:15:26.179 }, 00:15:26.179 { 00:15:26.179 "name": "BaseBdev4", 00:15:26.179 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:26.179 "is_configured": true, 00:15:26.179 "data_offset": 2048, 00:15:26.179 "data_size": 63488 00:15:26.179 } 00:15:26.179 ] 00:15:26.179 }' 00:15:26.179 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.179 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.179 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.179 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.179 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=475 00:15:26.179 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.179 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.179 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.179 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.179 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.179 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.179 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.179 
11:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.179 11:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.179 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.179 11:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.179 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.179 "name": "raid_bdev1", 00:15:26.179 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:26.179 "strip_size_kb": 0, 00:15:26.179 "state": "online", 00:15:26.179 "raid_level": "raid1", 00:15:26.179 "superblock": true, 00:15:26.179 "num_base_bdevs": 4, 00:15:26.179 "num_base_bdevs_discovered": 3, 00:15:26.179 "num_base_bdevs_operational": 3, 00:15:26.179 "process": { 00:15:26.179 "type": "rebuild", 00:15:26.179 "target": "spare", 00:15:26.179 "progress": { 00:15:26.179 "blocks": 26624, 00:15:26.179 "percent": 41 00:15:26.179 } 00:15:26.179 }, 00:15:26.179 "base_bdevs_list": [ 00:15:26.179 { 00:15:26.179 "name": "spare", 00:15:26.179 "uuid": "1fddefd6-763b-53b2-ad12-ecadb22a4dfc", 00:15:26.179 "is_configured": true, 00:15:26.179 "data_offset": 2048, 00:15:26.179 "data_size": 63488 00:15:26.179 }, 00:15:26.179 { 00:15:26.179 "name": null, 00:15:26.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.179 "is_configured": false, 00:15:26.179 "data_offset": 0, 00:15:26.179 "data_size": 63488 00:15:26.179 }, 00:15:26.179 { 00:15:26.179 "name": "BaseBdev3", 00:15:26.179 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:26.179 "is_configured": true, 00:15:26.179 "data_offset": 2048, 00:15:26.179 "data_size": 63488 00:15:26.179 }, 00:15:26.179 { 00:15:26.179 "name": "BaseBdev4", 00:15:26.179 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:26.179 "is_configured": true, 00:15:26.179 "data_offset": 2048, 00:15:26.179 "data_size": 63488 
00:15:26.179 } 00:15:26.179 ] 00:15:26.179 }' 00:15:26.179 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.179 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.179 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.179 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.179 11:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:27.556 11:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:27.556 11:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.556 11:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.556 11:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.556 11:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.556 11:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.556 11:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.556 11:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.556 11:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.556 11:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.556 11:25:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.556 11:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.556 "name": "raid_bdev1", 00:15:27.556 "uuid": 
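The `[[ rebuild == \r\e\b\u\i\l\d ]]` lines throughout this trace are not garbled: inside `[[ ]]` an unquoted right-hand side of `==` is a glob pattern, so the script quotes it to force a literal match, and bash's xtrace renders that quoting by backslash-escaping every character. A small standalone illustration (variable name is hypothetical):

```shell
#!/usr/bin/env bash
# xtrace prints the quoted RHS of == as per-character escapes, which is
# why the log shows '\r\e\b\u\i\l\d' instead of '"rebuild"'.
set -x
process_type=rebuild
[[ $process_type == "rebuild" ]] && echo "literal match"
[[ $process_type == r* ]] && echo "glob match"  # unquoted RHS: pattern
set +x
```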
"5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:27.556 "strip_size_kb": 0, 00:15:27.556 "state": "online", 00:15:27.556 "raid_level": "raid1", 00:15:27.556 "superblock": true, 00:15:27.556 "num_base_bdevs": 4, 00:15:27.556 "num_base_bdevs_discovered": 3, 00:15:27.556 "num_base_bdevs_operational": 3, 00:15:27.556 "process": { 00:15:27.556 "type": "rebuild", 00:15:27.556 "target": "spare", 00:15:27.556 "progress": { 00:15:27.556 "blocks": 49152, 00:15:27.556 "percent": 77 00:15:27.556 } 00:15:27.556 }, 00:15:27.556 "base_bdevs_list": [ 00:15:27.556 { 00:15:27.556 "name": "spare", 00:15:27.556 "uuid": "1fddefd6-763b-53b2-ad12-ecadb22a4dfc", 00:15:27.556 "is_configured": true, 00:15:27.556 "data_offset": 2048, 00:15:27.557 "data_size": 63488 00:15:27.557 }, 00:15:27.557 { 00:15:27.557 "name": null, 00:15:27.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.557 "is_configured": false, 00:15:27.557 "data_offset": 0, 00:15:27.557 "data_size": 63488 00:15:27.557 }, 00:15:27.557 { 00:15:27.557 "name": "BaseBdev3", 00:15:27.557 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:27.557 "is_configured": true, 00:15:27.557 "data_offset": 2048, 00:15:27.557 "data_size": 63488 00:15:27.557 }, 00:15:27.557 { 00:15:27.557 "name": "BaseBdev4", 00:15:27.557 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:27.557 "is_configured": true, 00:15:27.557 "data_offset": 2048, 00:15:27.557 "data_size": 63488 00:15:27.557 } 00:15:27.557 ] 00:15:27.557 }' 00:15:27.557 11:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.557 11:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.557 11:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.557 11:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.557 11:25:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:27.815 [2024-11-20 11:25:10.918745] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:27.815 [2024-11-20 11:25:10.918947] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:27.815 [2024-11-20 11:25:10.919129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.382 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:28.382 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.382 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.382 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:28.382 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.382 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.382 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.382 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.382 11:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.383 11:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.383 11:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.383 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.383 "name": "raid_bdev1", 00:15:28.383 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:28.383 "strip_size_kb": 0, 00:15:28.383 "state": "online", 00:15:28.383 "raid_level": "raid1", 00:15:28.383 "superblock": true, 00:15:28.383 "num_base_bdevs": 
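The `local timeout=475` / `(( SECONDS < timeout ))` / `sleep 1` sequence traced above is a plain bash poll-with-deadline loop: `SECONDS` is a builtin counter of seconds since shell start, so comparing it against a precomputed deadline bounds the wait. A condensed sketch (the `rebuild_done` predicate is a hypothetical stand-in for the rebuild-progress RPC query):

```shell
#!/usr/bin/env bash
# Poll until a condition holds or the deadline passes, using bash's
# built-in SECONDS variable (seconds elapsed since shell startup).
deadline=$((SECONDS + 5))   # hypothetical 5s budget; the test uses 475

rebuild_done() {            # stand-in predicate; the real test checks
  [ -e /tmp/rebuild.done ]  # 'process.type' via bdev_raid_get_bdevs
}

while (( SECONDS < deadline )); do
  if rebuild_done; then
    echo "finished"
    break
  fi
  sleep 1
done
```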
4, 00:15:28.383 "num_base_bdevs_discovered": 3, 00:15:28.383 "num_base_bdevs_operational": 3, 00:15:28.383 "base_bdevs_list": [ 00:15:28.383 { 00:15:28.383 "name": "spare", 00:15:28.383 "uuid": "1fddefd6-763b-53b2-ad12-ecadb22a4dfc", 00:15:28.383 "is_configured": true, 00:15:28.383 "data_offset": 2048, 00:15:28.383 "data_size": 63488 00:15:28.383 }, 00:15:28.383 { 00:15:28.383 "name": null, 00:15:28.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.383 "is_configured": false, 00:15:28.383 "data_offset": 0, 00:15:28.383 "data_size": 63488 00:15:28.383 }, 00:15:28.383 { 00:15:28.383 "name": "BaseBdev3", 00:15:28.383 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:28.383 "is_configured": true, 00:15:28.383 "data_offset": 2048, 00:15:28.383 "data_size": 63488 00:15:28.383 }, 00:15:28.383 { 00:15:28.383 "name": "BaseBdev4", 00:15:28.383 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:28.383 "is_configured": true, 00:15:28.383 "data_offset": 2048, 00:15:28.383 "data_size": 63488 00:15:28.383 } 00:15:28.383 ] 00:15:28.383 }' 00:15:28.383 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:28.642 11:25:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.642 "name": "raid_bdev1", 00:15:28.642 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:28.642 "strip_size_kb": 0, 00:15:28.642 "state": "online", 00:15:28.642 "raid_level": "raid1", 00:15:28.642 "superblock": true, 00:15:28.642 "num_base_bdevs": 4, 00:15:28.642 "num_base_bdevs_discovered": 3, 00:15:28.642 "num_base_bdevs_operational": 3, 00:15:28.642 "base_bdevs_list": [ 00:15:28.642 { 00:15:28.642 "name": "spare", 00:15:28.642 "uuid": "1fddefd6-763b-53b2-ad12-ecadb22a4dfc", 00:15:28.642 "is_configured": true, 00:15:28.642 "data_offset": 2048, 00:15:28.642 "data_size": 63488 00:15:28.642 }, 00:15:28.642 { 00:15:28.642 "name": null, 00:15:28.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.642 "is_configured": false, 00:15:28.642 "data_offset": 0, 00:15:28.642 "data_size": 63488 00:15:28.642 }, 00:15:28.642 { 00:15:28.642 "name": "BaseBdev3", 00:15:28.642 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:28.642 "is_configured": true, 00:15:28.642 "data_offset": 2048, 00:15:28.642 "data_size": 63488 00:15:28.642 }, 00:15:28.642 { 00:15:28.642 "name": "BaseBdev4", 00:15:28.642 "uuid": 
"f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:28.642 "is_configured": true, 00:15:28.642 "data_offset": 2048, 00:15:28.642 "data_size": 63488 00:15:28.642 } 00:15:28.642 ] 00:15:28.642 }' 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.642 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.643 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.643 11:25:11 
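The repeated `jq -r '.process.type // "none"'` filters above lean on jq's alternative operator: `//` substitutes its right operand whenever the left evaluates to `null` or `false`, so once the rebuild process disappears from the RPC output the filter degrades cleanly to the literal `none` that the `[[ none == \n\o\n\e ]]` checks expect. A standalone illustration (the sample JSON is invented, not taken from this run):

```shell
#!/usr/bin/env bash
# jq's '//' yields the right-hand side when the left is null/false, so
# a missing "process" object maps to the sentinel string "none".
with_process='{"name":"raid_bdev1","process":{"type":"rebuild","target":"spare"}}'
without_process='{"name":"raid_bdev1"}'

echo "$with_process"    | jq -r '.process.type // "none"'   # rebuild
echo "$without_process" | jq -r '.process.type // "none"'   # none
```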
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.643 11:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.643 11:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.902 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.902 "name": "raid_bdev1", 00:15:28.903 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:28.903 "strip_size_kb": 0, 00:15:28.903 "state": "online", 00:15:28.903 "raid_level": "raid1", 00:15:28.903 "superblock": true, 00:15:28.903 "num_base_bdevs": 4, 00:15:28.903 "num_base_bdevs_discovered": 3, 00:15:28.903 "num_base_bdevs_operational": 3, 00:15:28.903 "base_bdevs_list": [ 00:15:28.903 { 00:15:28.903 "name": "spare", 00:15:28.903 "uuid": "1fddefd6-763b-53b2-ad12-ecadb22a4dfc", 00:15:28.903 "is_configured": true, 00:15:28.903 "data_offset": 2048, 00:15:28.903 "data_size": 63488 00:15:28.903 }, 00:15:28.903 { 00:15:28.903 "name": null, 00:15:28.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.903 "is_configured": false, 00:15:28.903 "data_offset": 0, 00:15:28.903 "data_size": 63488 00:15:28.903 }, 00:15:28.903 { 00:15:28.903 "name": "BaseBdev3", 00:15:28.903 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:28.903 "is_configured": true, 00:15:28.903 "data_offset": 2048, 00:15:28.903 "data_size": 63488 00:15:28.903 }, 00:15:28.903 { 00:15:28.903 "name": "BaseBdev4", 00:15:28.903 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:28.903 "is_configured": true, 00:15:28.903 "data_offset": 2048, 00:15:28.903 "data_size": 63488 00:15:28.903 } 00:15:28.903 ] 00:15:28.903 }' 00:15:28.903 11:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.903 11:25:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.162 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:15:29.162 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.162 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.162 [2024-11-20 11:25:12.123251] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:29.162 [2024-11-20 11:25:12.123291] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.162 [2024-11-20 11:25:12.123386] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.162 [2024-11-20 11:25:12.123488] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.162 [2024-11-20 11:25:12.123500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:29.162 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.162 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:29.162 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.162 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.162 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.162 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.162 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:29.162 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:29.162 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:29.162 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:29.162 
11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:29.162 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:29.162 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:29.162 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:29.162 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:29.162 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:29.162 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:29.162 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:29.162 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:29.422 /dev/nbd0 00:15:29.422 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:29.422 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:29.422 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:29.422 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:29.422 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:29.422 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:29.422 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:29.422 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:29.422 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:29.422 11:25:12 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:29.422 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.422 1+0 records in 00:15:29.422 1+0 records out 00:15:29.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247753 s, 16.5 MB/s 00:15:29.422 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.422 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:29.422 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.422 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:29.422 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:29.422 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:29.422 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:29.423 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:29.682 /dev/nbd1 00:15:29.682 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:29.682 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:29.682 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:29.682 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:29.682 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:29.682 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- 
# (( i <= 20 )) 00:15:29.682 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:29.682 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:29.682 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:29.682 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:29.682 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.682 1+0 records in 00:15:29.682 1+0 records out 00:15:29.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442614 s, 9.3 MB/s 00:15:29.682 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.682 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:29.682 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.682 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:29.682 11:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:29.682 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:29.682 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:29.682 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:29.941 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:29.941 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:29.941 11:25:12 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:29.941 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:29.941 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:29.941 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:29.941 11:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:30.200 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:30.200 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:30.200 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:30.200 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.200 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.200 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:30.200 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:30.200 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.201 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:30.201 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i = 1 )) 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.459 [2024-11-20 11:25:13.369239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:30.459 [2024-11-20 11:25:13.369315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.459 [2024-11-20 11:25:13.369342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:30.459 [2024-11-20 11:25:13.369363] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:30.459 [2024-11-20 11:25:13.371762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.459 [2024-11-20 11:25:13.371807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:15:30.459 [2024-11-20 11:25:13.371935] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:30.459 [2024-11-20 11:25:13.372013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:30.459 [2024-11-20 11:25:13.372186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:30.459 [2024-11-20 11:25:13.372300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:30.459 spare 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.459 [2024-11-20 11:25:13.472218] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:30.459 [2024-11-20 11:25:13.472262] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:30.459 [2024-11-20 11:25:13.472635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:15:30.459 [2024-11-20 11:25:13.472866] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:30.459 [2024-11-20 11:25:13.472888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:30.459 [2024-11-20 11:25:13.473126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:30.459 11:25:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.459 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.460 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.460 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.460 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.460 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.460 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.460 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.460 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.460 11:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.460 11:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.460 11:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.460 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.460 "name": "raid_bdev1", 00:15:30.460 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:30.460 "strip_size_kb": 0, 00:15:30.460 "state": "online", 00:15:30.460 "raid_level": "raid1", 00:15:30.460 "superblock": true, 00:15:30.460 "num_base_bdevs": 4, 00:15:30.460 "num_base_bdevs_discovered": 3, 00:15:30.460 "num_base_bdevs_operational": 3, 00:15:30.460 "base_bdevs_list": [ 00:15:30.460 { 
00:15:30.460 "name": "spare", 00:15:30.460 "uuid": "1fddefd6-763b-53b2-ad12-ecadb22a4dfc", 00:15:30.460 "is_configured": true, 00:15:30.460 "data_offset": 2048, 00:15:30.460 "data_size": 63488 00:15:30.460 }, 00:15:30.460 { 00:15:30.460 "name": null, 00:15:30.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.460 "is_configured": false, 00:15:30.460 "data_offset": 2048, 00:15:30.460 "data_size": 63488 00:15:30.460 }, 00:15:30.460 { 00:15:30.460 "name": "BaseBdev3", 00:15:30.460 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:30.460 "is_configured": true, 00:15:30.460 "data_offset": 2048, 00:15:30.460 "data_size": 63488 00:15:30.460 }, 00:15:30.460 { 00:15:30.460 "name": "BaseBdev4", 00:15:30.460 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:30.460 "is_configured": true, 00:15:30.460 "data_offset": 2048, 00:15:30.460 "data_size": 63488 00:15:30.460 } 00:15:30.460 ] 00:15:30.460 }' 00:15:30.460 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.460 11:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.028 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:31.028 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.028 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:31.028 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:31.028 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.028 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.028 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.028 11:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:15:31.028 11:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.028 11:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.028 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.028 "name": "raid_bdev1", 00:15:31.028 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:31.028 "strip_size_kb": 0, 00:15:31.028 "state": "online", 00:15:31.028 "raid_level": "raid1", 00:15:31.028 "superblock": true, 00:15:31.028 "num_base_bdevs": 4, 00:15:31.028 "num_base_bdevs_discovered": 3, 00:15:31.028 "num_base_bdevs_operational": 3, 00:15:31.028 "base_bdevs_list": [ 00:15:31.028 { 00:15:31.028 "name": "spare", 00:15:31.028 "uuid": "1fddefd6-763b-53b2-ad12-ecadb22a4dfc", 00:15:31.028 "is_configured": true, 00:15:31.028 "data_offset": 2048, 00:15:31.028 "data_size": 63488 00:15:31.028 }, 00:15:31.028 { 00:15:31.028 "name": null, 00:15:31.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.028 "is_configured": false, 00:15:31.028 "data_offset": 2048, 00:15:31.028 "data_size": 63488 00:15:31.028 }, 00:15:31.028 { 00:15:31.028 "name": "BaseBdev3", 00:15:31.028 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:31.028 "is_configured": true, 00:15:31.028 "data_offset": 2048, 00:15:31.028 "data_size": 63488 00:15:31.028 }, 00:15:31.028 { 00:15:31.028 "name": "BaseBdev4", 00:15:31.028 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:31.028 "is_configured": true, 00:15:31.028 "data_offset": 2048, 00:15:31.028 "data_size": 63488 00:15:31.028 } 00:15:31.028 ] 00:15:31.028 }' 00:15:31.028 11:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.028 11:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:31.028 11:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.028 11:25:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:31.028 11:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.028 11:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.028 11:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.028 11:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:31.028 11:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.028 11:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.028 11:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:31.028 11:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.028 11:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.028 [2024-11-20 11:25:14.132049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:31.028 11:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.028 11:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:31.028 11:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.028 11:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.028 11:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.028 11:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.028 11:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:31.028 11:25:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.028 11:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.028 11:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.028 11:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.287 11:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.287 11:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.287 11:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.287 11:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.287 11:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.287 11:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.287 "name": "raid_bdev1", 00:15:31.287 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:31.287 "strip_size_kb": 0, 00:15:31.287 "state": "online", 00:15:31.287 "raid_level": "raid1", 00:15:31.287 "superblock": true, 00:15:31.287 "num_base_bdevs": 4, 00:15:31.287 "num_base_bdevs_discovered": 2, 00:15:31.287 "num_base_bdevs_operational": 2, 00:15:31.287 "base_bdevs_list": [ 00:15:31.287 { 00:15:31.287 "name": null, 00:15:31.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.287 "is_configured": false, 00:15:31.287 "data_offset": 0, 00:15:31.287 "data_size": 63488 00:15:31.287 }, 00:15:31.287 { 00:15:31.287 "name": null, 00:15:31.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.287 "is_configured": false, 00:15:31.287 "data_offset": 2048, 00:15:31.287 "data_size": 63488 00:15:31.287 }, 00:15:31.287 { 00:15:31.287 "name": "BaseBdev3", 00:15:31.287 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:31.287 
"is_configured": true, 00:15:31.287 "data_offset": 2048, 00:15:31.287 "data_size": 63488 00:15:31.287 }, 00:15:31.287 { 00:15:31.287 "name": "BaseBdev4", 00:15:31.288 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:31.288 "is_configured": true, 00:15:31.288 "data_offset": 2048, 00:15:31.288 "data_size": 63488 00:15:31.288 } 00:15:31.288 ] 00:15:31.288 }' 00:15:31.288 11:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.288 11:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.547 11:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:31.547 11:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.547 11:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.547 [2024-11-20 11:25:14.579652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:31.547 [2024-11-20 11:25:14.579889] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:31.547 [2024-11-20 11:25:14.579912] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:31.547 [2024-11-20 11:25:14.579957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:31.547 [2024-11-20 11:25:14.595386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:15:31.547 11:25:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.547 11:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:31.547 [2024-11-20 11:25:14.597381] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:32.923 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.924 "name": "raid_bdev1", 00:15:32.924 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:32.924 "strip_size_kb": 0, 00:15:32.924 "state": "online", 00:15:32.924 "raid_level": "raid1", 
00:15:32.924 "superblock": true, 00:15:32.924 "num_base_bdevs": 4, 00:15:32.924 "num_base_bdevs_discovered": 3, 00:15:32.924 "num_base_bdevs_operational": 3, 00:15:32.924 "process": { 00:15:32.924 "type": "rebuild", 00:15:32.924 "target": "spare", 00:15:32.924 "progress": { 00:15:32.924 "blocks": 20480, 00:15:32.924 "percent": 32 00:15:32.924 } 00:15:32.924 }, 00:15:32.924 "base_bdevs_list": [ 00:15:32.924 { 00:15:32.924 "name": "spare", 00:15:32.924 "uuid": "1fddefd6-763b-53b2-ad12-ecadb22a4dfc", 00:15:32.924 "is_configured": true, 00:15:32.924 "data_offset": 2048, 00:15:32.924 "data_size": 63488 00:15:32.924 }, 00:15:32.924 { 00:15:32.924 "name": null, 00:15:32.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.924 "is_configured": false, 00:15:32.924 "data_offset": 2048, 00:15:32.924 "data_size": 63488 00:15:32.924 }, 00:15:32.924 { 00:15:32.924 "name": "BaseBdev3", 00:15:32.924 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:32.924 "is_configured": true, 00:15:32.924 "data_offset": 2048, 00:15:32.924 "data_size": 63488 00:15:32.924 }, 00:15:32.924 { 00:15:32.924 "name": "BaseBdev4", 00:15:32.924 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:32.924 "is_configured": true, 00:15:32.924 "data_offset": 2048, 00:15:32.924 "data_size": 63488 00:15:32.924 } 00:15:32.924 ] 00:15:32.924 }' 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.924 [2024-11-20 11:25:15.740972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:32.924 [2024-11-20 11:25:15.803265] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:32.924 [2024-11-20 11:25:15.803352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.924 [2024-11-20 11:25:15.803372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:32.924 [2024-11-20 11:25:15.803379] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.924 "name": "raid_bdev1", 00:15:32.924 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:32.924 "strip_size_kb": 0, 00:15:32.924 "state": "online", 00:15:32.924 "raid_level": "raid1", 00:15:32.924 "superblock": true, 00:15:32.924 "num_base_bdevs": 4, 00:15:32.924 "num_base_bdevs_discovered": 2, 00:15:32.924 "num_base_bdevs_operational": 2, 00:15:32.924 "base_bdevs_list": [ 00:15:32.924 { 00:15:32.924 "name": null, 00:15:32.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.924 "is_configured": false, 00:15:32.924 "data_offset": 0, 00:15:32.924 "data_size": 63488 00:15:32.924 }, 00:15:32.924 { 00:15:32.924 "name": null, 00:15:32.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.924 "is_configured": false, 00:15:32.924 "data_offset": 2048, 00:15:32.924 "data_size": 63488 00:15:32.924 }, 00:15:32.924 { 00:15:32.924 "name": "BaseBdev3", 00:15:32.924 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:32.924 "is_configured": true, 00:15:32.924 "data_offset": 2048, 00:15:32.924 "data_size": 63488 00:15:32.924 }, 00:15:32.924 { 00:15:32.924 "name": "BaseBdev4", 00:15:32.924 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:32.924 "is_configured": true, 00:15:32.924 "data_offset": 2048, 00:15:32.924 "data_size": 63488 00:15:32.924 } 00:15:32.924 ] 00:15:32.924 }' 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:32.924 11:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.185 11:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:33.185 11:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.185 11:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.185 [2024-11-20 11:25:16.297228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:33.185 [2024-11-20 11:25:16.297300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.185 [2024-11-20 11:25:16.297342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:33.185 [2024-11-20 11:25:16.297351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.185 [2024-11-20 11:25:16.297898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.185 [2024-11-20 11:25:16.297933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:33.185 [2024-11-20 11:25:16.298042] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:33.185 [2024-11-20 11:25:16.298060] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:33.185 [2024-11-20 11:25:16.298077] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:33.185 [2024-11-20 11:25:16.298114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:33.451 [2024-11-20 11:25:16.313446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:15:33.451 spare 00:15:33.451 11:25:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.451 11:25:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:33.451 [2024-11-20 11:25:16.315465] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:34.389 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.389 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.389 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.389 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.389 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.389 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.389 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.389 11:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.389 11:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.389 11:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.389 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.389 "name": "raid_bdev1", 00:15:34.389 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:34.389 "strip_size_kb": 0, 00:15:34.389 "state": "online", 00:15:34.389 
"raid_level": "raid1", 00:15:34.389 "superblock": true, 00:15:34.389 "num_base_bdevs": 4, 00:15:34.389 "num_base_bdevs_discovered": 3, 00:15:34.389 "num_base_bdevs_operational": 3, 00:15:34.389 "process": { 00:15:34.389 "type": "rebuild", 00:15:34.389 "target": "spare", 00:15:34.389 "progress": { 00:15:34.389 "blocks": 20480, 00:15:34.389 "percent": 32 00:15:34.389 } 00:15:34.389 }, 00:15:34.389 "base_bdevs_list": [ 00:15:34.389 { 00:15:34.389 "name": "spare", 00:15:34.389 "uuid": "1fddefd6-763b-53b2-ad12-ecadb22a4dfc", 00:15:34.389 "is_configured": true, 00:15:34.389 "data_offset": 2048, 00:15:34.389 "data_size": 63488 00:15:34.389 }, 00:15:34.389 { 00:15:34.389 "name": null, 00:15:34.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.389 "is_configured": false, 00:15:34.389 "data_offset": 2048, 00:15:34.389 "data_size": 63488 00:15:34.389 }, 00:15:34.389 { 00:15:34.389 "name": "BaseBdev3", 00:15:34.389 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:34.389 "is_configured": true, 00:15:34.389 "data_offset": 2048, 00:15:34.389 "data_size": 63488 00:15:34.389 }, 00:15:34.389 { 00:15:34.389 "name": "BaseBdev4", 00:15:34.389 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:34.389 "is_configured": true, 00:15:34.389 "data_offset": 2048, 00:15:34.389 "data_size": 63488 00:15:34.389 } 00:15:34.389 ] 00:15:34.389 }' 00:15:34.389 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.389 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.389 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.389 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.389 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:34.389 11:25:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.389 11:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.389 [2024-11-20 11:25:17.471688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:34.648 [2024-11-20 11:25:17.521595] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:34.648 [2024-11-20 11:25:17.521684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.648 [2024-11-20 11:25:17.521716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:34.648 [2024-11-20 11:25:17.521726] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:34.648 11:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.648 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:34.648 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.648 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.648 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.649 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.649 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.649 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.649 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.649 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.649 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.649 
11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.649 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.649 11:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.649 11:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.649 11:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.649 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.649 "name": "raid_bdev1", 00:15:34.649 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:34.649 "strip_size_kb": 0, 00:15:34.649 "state": "online", 00:15:34.649 "raid_level": "raid1", 00:15:34.649 "superblock": true, 00:15:34.649 "num_base_bdevs": 4, 00:15:34.649 "num_base_bdevs_discovered": 2, 00:15:34.649 "num_base_bdevs_operational": 2, 00:15:34.649 "base_bdevs_list": [ 00:15:34.649 { 00:15:34.649 "name": null, 00:15:34.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.649 "is_configured": false, 00:15:34.649 "data_offset": 0, 00:15:34.649 "data_size": 63488 00:15:34.649 }, 00:15:34.649 { 00:15:34.649 "name": null, 00:15:34.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.649 "is_configured": false, 00:15:34.649 "data_offset": 2048, 00:15:34.649 "data_size": 63488 00:15:34.649 }, 00:15:34.649 { 00:15:34.649 "name": "BaseBdev3", 00:15:34.649 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:34.649 "is_configured": true, 00:15:34.649 "data_offset": 2048, 00:15:34.649 "data_size": 63488 00:15:34.649 }, 00:15:34.649 { 00:15:34.649 "name": "BaseBdev4", 00:15:34.649 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:34.649 "is_configured": true, 00:15:34.649 "data_offset": 2048, 00:15:34.649 "data_size": 63488 00:15:34.649 } 00:15:34.649 ] 00:15:34.649 }' 00:15:34.649 11:25:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.649 11:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.908 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:34.908 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.908 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:34.908 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:34.908 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.908 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.908 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.908 11:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.908 11:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.908 11:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.908 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.908 "name": "raid_bdev1", 00:15:34.908 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:34.908 "strip_size_kb": 0, 00:15:34.908 "state": "online", 00:15:34.908 "raid_level": "raid1", 00:15:34.908 "superblock": true, 00:15:34.908 "num_base_bdevs": 4, 00:15:34.908 "num_base_bdevs_discovered": 2, 00:15:34.908 "num_base_bdevs_operational": 2, 00:15:34.908 "base_bdevs_list": [ 00:15:34.908 { 00:15:34.908 "name": null, 00:15:34.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.908 "is_configured": false, 00:15:34.908 "data_offset": 0, 00:15:34.908 "data_size": 63488 00:15:34.908 }, 00:15:34.908 
{ 00:15:34.908 "name": null, 00:15:34.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.908 "is_configured": false, 00:15:34.908 "data_offset": 2048, 00:15:34.908 "data_size": 63488 00:15:34.908 }, 00:15:34.908 { 00:15:34.908 "name": "BaseBdev3", 00:15:34.908 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:34.908 "is_configured": true, 00:15:34.908 "data_offset": 2048, 00:15:34.908 "data_size": 63488 00:15:34.908 }, 00:15:34.908 { 00:15:34.908 "name": "BaseBdev4", 00:15:34.908 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:34.908 "is_configured": true, 00:15:34.908 "data_offset": 2048, 00:15:34.908 "data_size": 63488 00:15:34.908 } 00:15:34.908 ] 00:15:34.908 }' 00:15:34.908 11:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.908 11:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:34.908 11:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.167 11:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:35.167 11:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:35.167 11:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.167 11:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.167 11:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.167 11:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:35.167 11:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.167 11:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.167 [2024-11-20 11:25:18.075607] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:35.167 [2024-11-20 11:25:18.075672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.167 [2024-11-20 11:25:18.075692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:35.167 [2024-11-20 11:25:18.075704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.167 [2024-11-20 11:25:18.076214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.167 [2024-11-20 11:25:18.076246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:35.167 [2024-11-20 11:25:18.076338] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:35.167 [2024-11-20 11:25:18.076362] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:35.167 [2024-11-20 11:25:18.076371] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:35.167 [2024-11-20 11:25:18.076409] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:35.168 BaseBdev1 00:15:35.168 11:25:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.168 11:25:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:36.101 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:36.101 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.101 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.101 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.101 11:25:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.101 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.101 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.101 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.101 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.101 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.101 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.101 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.101 11:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.101 11:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.101 11:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.101 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.101 "name": "raid_bdev1", 00:15:36.101 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:36.101 "strip_size_kb": 0, 00:15:36.101 "state": "online", 00:15:36.101 "raid_level": "raid1", 00:15:36.101 "superblock": true, 00:15:36.101 "num_base_bdevs": 4, 00:15:36.101 "num_base_bdevs_discovered": 2, 00:15:36.101 "num_base_bdevs_operational": 2, 00:15:36.101 "base_bdevs_list": [ 00:15:36.101 { 00:15:36.101 "name": null, 00:15:36.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.101 "is_configured": false, 00:15:36.101 "data_offset": 0, 00:15:36.101 "data_size": 63488 00:15:36.101 }, 00:15:36.101 { 00:15:36.101 "name": null, 00:15:36.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.101 
"is_configured": false, 00:15:36.101 "data_offset": 2048, 00:15:36.101 "data_size": 63488 00:15:36.101 }, 00:15:36.101 { 00:15:36.101 "name": "BaseBdev3", 00:15:36.101 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:36.101 "is_configured": true, 00:15:36.101 "data_offset": 2048, 00:15:36.101 "data_size": 63488 00:15:36.101 }, 00:15:36.101 { 00:15:36.101 "name": "BaseBdev4", 00:15:36.101 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:36.101 "is_configured": true, 00:15:36.101 "data_offset": 2048, 00:15:36.101 "data_size": 63488 00:15:36.101 } 00:15:36.101 ] 00:15:36.101 }' 00:15:36.101 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.101 11:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:15:36.667 "name": "raid_bdev1", 00:15:36.667 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:36.667 "strip_size_kb": 0, 00:15:36.667 "state": "online", 00:15:36.667 "raid_level": "raid1", 00:15:36.667 "superblock": true, 00:15:36.667 "num_base_bdevs": 4, 00:15:36.667 "num_base_bdevs_discovered": 2, 00:15:36.667 "num_base_bdevs_operational": 2, 00:15:36.667 "base_bdevs_list": [ 00:15:36.667 { 00:15:36.667 "name": null, 00:15:36.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.667 "is_configured": false, 00:15:36.667 "data_offset": 0, 00:15:36.667 "data_size": 63488 00:15:36.667 }, 00:15:36.667 { 00:15:36.667 "name": null, 00:15:36.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.667 "is_configured": false, 00:15:36.667 "data_offset": 2048, 00:15:36.667 "data_size": 63488 00:15:36.667 }, 00:15:36.667 { 00:15:36.667 "name": "BaseBdev3", 00:15:36.667 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:36.667 "is_configured": true, 00:15:36.667 "data_offset": 2048, 00:15:36.667 "data_size": 63488 00:15:36.667 }, 00:15:36.667 { 00:15:36.667 "name": "BaseBdev4", 00:15:36.667 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:36.667 "is_configured": true, 00:15:36.667 "data_offset": 2048, 00:15:36.667 "data_size": 63488 00:15:36.667 } 00:15:36.667 ] 00:15:36.667 }' 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.667 [2024-11-20 11:25:19.685827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:36.667 [2024-11-20 11:25:19.686033] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:36.667 [2024-11-20 11:25:19.686051] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:36.667 request: 00:15:36.667 { 00:15:36.667 "base_bdev": "BaseBdev1", 00:15:36.667 "raid_bdev": "raid_bdev1", 00:15:36.667 "method": "bdev_raid_add_base_bdev", 00:15:36.667 "req_id": 1 00:15:36.667 } 00:15:36.667 Got JSON-RPC error response 00:15:36.667 response: 00:15:36.667 { 00:15:36.667 "code": -22, 00:15:36.667 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:36.667 } 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:36.667 11:25:19 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:15:36.668 11:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:36.668 11:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:36.668 11:25:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:36.668 11:25:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:37.616 11:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:37.617 11:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.617 11:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.617 11:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.617 11:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.617 11:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:37.617 11:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.617 11:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.617 11:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.617 11:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.617 11:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.617 11:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.617 11:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.617 11:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:37.617 11:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.874 11:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.874 "name": "raid_bdev1", 00:15:37.874 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:37.874 "strip_size_kb": 0, 00:15:37.875 "state": "online", 00:15:37.875 "raid_level": "raid1", 00:15:37.875 "superblock": true, 00:15:37.875 "num_base_bdevs": 4, 00:15:37.875 "num_base_bdevs_discovered": 2, 00:15:37.875 "num_base_bdevs_operational": 2, 00:15:37.875 "base_bdevs_list": [ 00:15:37.875 { 00:15:37.875 "name": null, 00:15:37.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.875 "is_configured": false, 00:15:37.875 "data_offset": 0, 00:15:37.875 "data_size": 63488 00:15:37.875 }, 00:15:37.875 { 00:15:37.875 "name": null, 00:15:37.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.875 "is_configured": false, 00:15:37.875 "data_offset": 2048, 00:15:37.875 "data_size": 63488 00:15:37.875 }, 00:15:37.875 { 00:15:37.875 "name": "BaseBdev3", 00:15:37.875 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:37.875 "is_configured": true, 00:15:37.875 "data_offset": 2048, 00:15:37.875 "data_size": 63488 00:15:37.875 }, 00:15:37.875 { 00:15:37.875 "name": "BaseBdev4", 00:15:37.875 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:37.875 "is_configured": true, 00:15:37.875 "data_offset": 2048, 00:15:37.875 "data_size": 63488 00:15:37.875 } 00:15:37.875 ] 00:15:37.875 }' 00:15:37.875 11:25:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.875 11:25:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.132 11:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:38.132 11:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.133 11:25:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:38.133 11:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:38.133 11:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.133 11:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.133 11:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.133 11:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.133 11:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.133 11:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.133 11:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.133 "name": "raid_bdev1", 00:15:38.133 "uuid": "5d9ffee8-d01f-4732-b9d9-a145a20ef0b4", 00:15:38.133 "strip_size_kb": 0, 00:15:38.133 "state": "online", 00:15:38.133 "raid_level": "raid1", 00:15:38.133 "superblock": true, 00:15:38.133 "num_base_bdevs": 4, 00:15:38.133 "num_base_bdevs_discovered": 2, 00:15:38.133 "num_base_bdevs_operational": 2, 00:15:38.133 "base_bdevs_list": [ 00:15:38.133 { 00:15:38.133 "name": null, 00:15:38.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.133 "is_configured": false, 00:15:38.133 "data_offset": 0, 00:15:38.133 "data_size": 63488 00:15:38.133 }, 00:15:38.133 { 00:15:38.133 "name": null, 00:15:38.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.133 "is_configured": false, 00:15:38.133 "data_offset": 2048, 00:15:38.133 "data_size": 63488 00:15:38.133 }, 00:15:38.133 { 00:15:38.133 "name": "BaseBdev3", 00:15:38.133 "uuid": "247eeb91-6a8c-57a6-99f9-f0201e1dd456", 00:15:38.133 "is_configured": true, 00:15:38.133 "data_offset": 2048, 00:15:38.133 "data_size": 63488 00:15:38.133 }, 
00:15:38.133 { 00:15:38.133 "name": "BaseBdev4", 00:15:38.133 "uuid": "f2941bb7-3c28-5590-9e7d-95efd7d63d69", 00:15:38.133 "is_configured": true, 00:15:38.133 "data_offset": 2048, 00:15:38.133 "data_size": 63488 00:15:38.133 } 00:15:38.133 ] 00:15:38.133 }' 00:15:38.133 11:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.133 11:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:38.133 11:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.133 11:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:38.133 11:25:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78151 00:15:38.133 11:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78151 ']' 00:15:38.133 11:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78151 00:15:38.411 11:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:38.411 11:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.411 11:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78151 00:15:38.411 11:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:38.411 11:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:38.411 killing process with pid 78151 00:15:38.411 11:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78151' 00:15:38.411 11:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78151 00:15:38.411 Received shutdown signal, test time was about 60.000000 seconds 00:15:38.411 00:15:38.411 Latency(us) 00:15:38.411 
[2024-11-20T11:25:21.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.411 [2024-11-20T11:25:21.527Z] =================================================================================================================== 00:15:38.411 [2024-11-20T11:25:21.527Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:38.411 [2024-11-20 11:25:21.285197] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:38.411 [2024-11-20 11:25:21.285341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.411 [2024-11-20 11:25:21.285436] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:38.411 [2024-11-20 11:25:21.285448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:38.411 11:25:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78151 00:15:38.994 [2024-11-20 11:25:21.801863] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:39.929 11:25:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:39.929 00:15:39.929 real 0m25.685s 00:15:39.929 user 0m30.858s 00:15:39.929 sys 0m3.820s 00:15:39.929 11:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.929 11:25:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.929 ************************************ 00:15:39.929 END TEST raid_rebuild_test_sb 00:15:39.929 ************************************ 00:15:40.188 11:25:23 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:15:40.188 11:25:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:40.188 11:25:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.188 11:25:23 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:15:40.188 ************************************ 00:15:40.188 START TEST raid_rebuild_test_io 00:15:40.188 ************************************ 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78910 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78910 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78910 ']' 00:15:40.188 11:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.189 11:25:23 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:15:40.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.189 11:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.189 11:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:40.189 11:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.189 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:40.189 Zero copy mechanism will not be used. 00:15:40.189 [2024-11-20 11:25:23.217143] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:15:40.189 [2024-11-20 11:25:23.217270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78910 ] 00:15:40.447 [2024-11-20 11:25:23.397177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.447 [2024-11-20 11:25:23.517711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.706 [2024-11-20 11:25:23.743425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:40.706 [2024-11-20 11:25:23.743520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.274 BaseBdev1_malloc 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.274 [2024-11-20 11:25:24.146405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:41.274 [2024-11-20 11:25:24.146504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.274 [2024-11-20 11:25:24.146544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:41.274 [2024-11-20 11:25:24.146561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.274 [2024-11-20 11:25:24.148954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.274 [2024-11-20 11:25:24.148998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:41.274 BaseBdev1 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:15:41.274 BaseBdev2_malloc 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.274 [2024-11-20 11:25:24.201676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:41.274 [2024-11-20 11:25:24.201765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.274 [2024-11-20 11:25:24.201799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:41.274 [2024-11-20 11:25:24.201818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.274 [2024-11-20 11:25:24.204369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.274 [2024-11-20 11:25:24.204428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:41.274 BaseBdev2 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.274 BaseBdev3_malloc 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.274 [2024-11-20 11:25:24.272689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:41.274 [2024-11-20 11:25:24.272765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.274 [2024-11-20 11:25:24.272796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:41.274 [2024-11-20 11:25:24.272810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.274 [2024-11-20 11:25:24.275166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.274 [2024-11-20 11:25:24.275212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:41.274 BaseBdev3 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.274 BaseBdev4_malloc 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.274 [2024-11-20 11:25:24.329487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:41.274 [2024-11-20 11:25:24.329559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.274 [2024-11-20 11:25:24.329591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:41.274 [2024-11-20 11:25:24.329608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.274 [2024-11-20 11:25:24.332681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.274 [2024-11-20 11:25:24.332745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:41.274 BaseBdev4 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.274 spare_malloc 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.274 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.534 spare_delay 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.534 [2024-11-20 11:25:24.399537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:41.534 [2024-11-20 11:25:24.399619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.534 [2024-11-20 11:25:24.399648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:41.534 [2024-11-20 11:25:24.399661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.534 [2024-11-20 11:25:24.402148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.534 [2024-11-20 11:25:24.402192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:41.534 spare 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.534 [2024-11-20 11:25:24.411507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:41.534 [2024-11-20 11:25:24.413625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.534 [2024-11-20 11:25:24.413707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:41.534 [2024-11-20 11:25:24.413769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:15:41.534 [2024-11-20 11:25:24.413866] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:41.534 [2024-11-20 11:25:24.413881] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:41.534 [2024-11-20 11:25:24.414221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:41.534 [2024-11-20 11:25:24.414438] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:41.534 [2024-11-20 11:25:24.414477] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:41.534 [2024-11-20 11:25:24.414677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.534 "name": "raid_bdev1", 00:15:41.534 "uuid": "7fec2b7f-c321-454f-911b-996144ec96df", 00:15:41.534 "strip_size_kb": 0, 00:15:41.534 "state": "online", 00:15:41.534 "raid_level": "raid1", 00:15:41.534 "superblock": false, 00:15:41.534 "num_base_bdevs": 4, 00:15:41.534 "num_base_bdevs_discovered": 4, 00:15:41.534 "num_base_bdevs_operational": 4, 00:15:41.534 "base_bdevs_list": [ 00:15:41.534 { 00:15:41.534 "name": "BaseBdev1", 00:15:41.534 "uuid": "77cc34af-d5d2-5d44-87a2-9970f7156977", 00:15:41.534 "is_configured": true, 00:15:41.534 "data_offset": 0, 00:15:41.534 "data_size": 65536 00:15:41.534 }, 00:15:41.534 { 00:15:41.534 "name": "BaseBdev2", 00:15:41.534 "uuid": "e7f25b93-ef41-59c4-ba52-b79c93dfa907", 00:15:41.534 "is_configured": true, 00:15:41.534 "data_offset": 0, 00:15:41.534 "data_size": 65536 00:15:41.534 }, 00:15:41.534 { 00:15:41.534 "name": "BaseBdev3", 00:15:41.534 "uuid": "49192267-4a18-5bac-aa2c-b2009f9107e7", 00:15:41.534 "is_configured": true, 00:15:41.534 "data_offset": 0, 00:15:41.534 "data_size": 65536 00:15:41.534 }, 00:15:41.534 { 00:15:41.534 "name": "BaseBdev4", 00:15:41.534 "uuid": "1bad4b7b-ddcd-5ca1-a21c-dce8ed9ef5e6", 00:15:41.534 "is_configured": true, 00:15:41.534 "data_offset": 0, 00:15:41.534 "data_size": 65536 00:15:41.534 } 00:15:41.534 ] 00:15:41.534 }' 00:15:41.534 
11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.534 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.793 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:41.793 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:41.793 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.793 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.793 [2024-11-20 11:25:24.811184] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:41.793 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.793 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:41.793 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:41.793 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:41.794 11:25:24 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.794 [2024-11-20 11:25:24.886662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.794 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.053 11:25:24 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.053 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.053 "name": "raid_bdev1", 00:15:42.053 "uuid": "7fec2b7f-c321-454f-911b-996144ec96df", 00:15:42.053 "strip_size_kb": 0, 00:15:42.053 "state": "online", 00:15:42.053 "raid_level": "raid1", 00:15:42.053 "superblock": false, 00:15:42.053 "num_base_bdevs": 4, 00:15:42.053 "num_base_bdevs_discovered": 3, 00:15:42.053 "num_base_bdevs_operational": 3, 00:15:42.053 "base_bdevs_list": [ 00:15:42.053 { 00:15:42.053 "name": null, 00:15:42.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.053 "is_configured": false, 00:15:42.053 "data_offset": 0, 00:15:42.053 "data_size": 65536 00:15:42.053 }, 00:15:42.053 { 00:15:42.053 "name": "BaseBdev2", 00:15:42.053 "uuid": "e7f25b93-ef41-59c4-ba52-b79c93dfa907", 00:15:42.053 "is_configured": true, 00:15:42.053 "data_offset": 0, 00:15:42.053 "data_size": 65536 00:15:42.053 }, 00:15:42.053 { 00:15:42.053 "name": "BaseBdev3", 00:15:42.053 "uuid": "49192267-4a18-5bac-aa2c-b2009f9107e7", 00:15:42.053 "is_configured": true, 00:15:42.053 "data_offset": 0, 00:15:42.053 "data_size": 65536 00:15:42.053 }, 00:15:42.053 { 00:15:42.053 "name": "BaseBdev4", 00:15:42.053 "uuid": "1bad4b7b-ddcd-5ca1-a21c-dce8ed9ef5e6", 00:15:42.053 "is_configured": true, 00:15:42.053 "data_offset": 0, 00:15:42.053 "data_size": 65536 00:15:42.053 } 00:15:42.053 ] 00:15:42.053 }' 00:15:42.053 11:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.053 11:25:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.053 [2024-11-20 11:25:24.976413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:42.053 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:42.053 Zero copy mechanism will not be used. 00:15:42.053 Running I/O for 60 seconds... 
00:15:42.312 11:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:42.312 11:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.312 11:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.312 [2024-11-20 11:25:25.375090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.570 11:25:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.570 11:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:42.570 [2024-11-20 11:25:25.446386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:42.570 [2024-11-20 11:25:25.448567] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:42.570 [2024-11-20 11:25:25.552116] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:42.570 [2024-11-20 11:25:25.553756] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:42.828 [2024-11-20 11:25:25.761932] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:42.828 [2024-11-20 11:25:25.762284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:43.400 126.00 IOPS, 378.00 MiB/s [2024-11-20T11:25:26.516Z] [2024-11-20 11:25:26.240630] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:43.400 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.400 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:43.400 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.400 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.400 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.400 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.400 11:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.400 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.400 11:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.400 11:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.400 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.400 "name": "raid_bdev1", 00:15:43.400 "uuid": "7fec2b7f-c321-454f-911b-996144ec96df", 00:15:43.400 "strip_size_kb": 0, 00:15:43.400 "state": "online", 00:15:43.400 "raid_level": "raid1", 00:15:43.400 "superblock": false, 00:15:43.400 "num_base_bdevs": 4, 00:15:43.400 "num_base_bdevs_discovered": 4, 00:15:43.400 "num_base_bdevs_operational": 4, 00:15:43.400 "process": { 00:15:43.400 "type": "rebuild", 00:15:43.400 "target": "spare", 00:15:43.400 "progress": { 00:15:43.400 "blocks": 10240, 00:15:43.400 "percent": 15 00:15:43.400 } 00:15:43.400 }, 00:15:43.400 "base_bdevs_list": [ 00:15:43.400 { 00:15:43.400 "name": "spare", 00:15:43.400 "uuid": "e28edc1b-5a84-5054-99e1-3ba1c1a5a923", 00:15:43.400 "is_configured": true, 00:15:43.400 "data_offset": 0, 00:15:43.400 "data_size": 65536 00:15:43.400 }, 00:15:43.400 { 00:15:43.400 "name": "BaseBdev2", 00:15:43.400 "uuid": "e7f25b93-ef41-59c4-ba52-b79c93dfa907", 00:15:43.400 "is_configured": true, 00:15:43.400 "data_offset": 0, 00:15:43.400 
"data_size": 65536 00:15:43.400 }, 00:15:43.400 { 00:15:43.400 "name": "BaseBdev3", 00:15:43.400 "uuid": "49192267-4a18-5bac-aa2c-b2009f9107e7", 00:15:43.400 "is_configured": true, 00:15:43.400 "data_offset": 0, 00:15:43.400 "data_size": 65536 00:15:43.400 }, 00:15:43.400 { 00:15:43.400 "name": "BaseBdev4", 00:15:43.400 "uuid": "1bad4b7b-ddcd-5ca1-a21c-dce8ed9ef5e6", 00:15:43.400 "is_configured": true, 00:15:43.400 "data_offset": 0, 00:15:43.400 "data_size": 65536 00:15:43.400 } 00:15:43.400 ] 00:15:43.400 }' 00:15:43.400 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.659 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.659 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.659 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.659 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:43.659 11:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.659 11:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.659 [2024-11-20 11:25:26.595783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.659 [2024-11-20 11:25:26.618834] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:43.659 [2024-11-20 11:25:26.721118] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:43.659 [2024-11-20 11:25:26.740639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.659 [2024-11-20 11:25:26.740726] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.659 [2024-11-20 11:25:26.740746] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:15:43.659 [2024-11-20 11:25:26.760858] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220
00:15:43.918 11:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:43.918 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:15:43.918 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:43.918 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:43.918 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:43.918 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:43.918 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:43.918 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:43.918 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:43.918 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:43.918 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:43.918 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:43.918 11:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:43.918 11:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:43.918 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:43.918 11:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:43.918 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:43.918 "name": "raid_bdev1",
00:15:43.918 "uuid": "7fec2b7f-c321-454f-911b-996144ec96df",
00:15:43.918 "strip_size_kb": 0,
00:15:43.918 "state": "online",
00:15:43.918 "raid_level": "raid1",
00:15:43.918 "superblock": false,
00:15:43.918 "num_base_bdevs": 4,
00:15:43.918 "num_base_bdevs_discovered": 3,
00:15:43.918 "num_base_bdevs_operational": 3,
00:15:43.918 "base_bdevs_list": [
00:15:43.918 {
00:15:43.918 "name": null,
00:15:43.918 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:43.918 "is_configured": false,
00:15:43.918 "data_offset": 0,
00:15:43.918 "data_size": 65536
00:15:43.918 },
00:15:43.918 {
00:15:43.918 "name": "BaseBdev2",
00:15:43.918 "uuid": "e7f25b93-ef41-59c4-ba52-b79c93dfa907",
00:15:43.918 "is_configured": true,
00:15:43.918 "data_offset": 0,
00:15:43.918 "data_size": 65536
00:15:43.918 },
00:15:43.918 {
00:15:43.918 "name": "BaseBdev3",
00:15:43.918 "uuid": "49192267-4a18-5bac-aa2c-b2009f9107e7",
00:15:43.918 "is_configured": true,
00:15:43.918 "data_offset": 0,
00:15:43.918 "data_size": 65536
00:15:43.918 },
00:15:43.918 {
00:15:43.918 "name": "BaseBdev4",
00:15:43.918 "uuid": "1bad4b7b-ddcd-5ca1-a21c-dce8ed9ef5e6",
00:15:43.918 "is_configured": true,
00:15:43.918 "data_offset": 0,
00:15:43.918 "data_size": 65536
00:15:43.918 }
00:15:43.918 ]
00:15:43.918 }'
00:15:43.918 11:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:43.918 11:25:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:44.177 116.50 IOPS, 349.50 MiB/s [2024-11-20T11:25:27.293Z] 11:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:44.177 11:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:44.177 11:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:44.177 11:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:44.177 11:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:44.177 11:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:44.177 11:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:44.177 11:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:44.177 11:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:44.436 11:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:44.436 11:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:44.436 "name": "raid_bdev1",
00:15:44.436 "uuid": "7fec2b7f-c321-454f-911b-996144ec96df",
00:15:44.436 "strip_size_kb": 0,
00:15:44.436 "state": "online",
00:15:44.436 "raid_level": "raid1",
00:15:44.436 "superblock": false,
00:15:44.436 "num_base_bdevs": 4,
00:15:44.436 "num_base_bdevs_discovered": 3,
00:15:44.436 "num_base_bdevs_operational": 3,
00:15:44.436 "base_bdevs_list": [
00:15:44.436 {
00:15:44.436 "name": null,
00:15:44.436 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:44.436 "is_configured": false,
00:15:44.436 "data_offset": 0,
00:15:44.436 "data_size": 65536
00:15:44.436 },
00:15:44.436 {
00:15:44.436 "name": "BaseBdev2",
00:15:44.436 "uuid": "e7f25b93-ef41-59c4-ba52-b79c93dfa907",
00:15:44.436 "is_configured": true,
00:15:44.436 "data_offset": 0,
00:15:44.436 "data_size": 65536
00:15:44.436 },
00:15:44.436 {
00:15:44.436 "name": "BaseBdev3",
00:15:44.436 "uuid": "49192267-4a18-5bac-aa2c-b2009f9107e7",
00:15:44.436 "is_configured": true,
00:15:44.436 "data_offset": 0,
00:15:44.436 "data_size": 65536
00:15:44.436 },
00:15:44.436 {
00:15:44.436 "name": "BaseBdev4",
00:15:44.436 "uuid": "1bad4b7b-ddcd-5ca1-a21c-dce8ed9ef5e6",
00:15:44.436 "is_configured": true,
00:15:44.436 "data_offset": 0,
00:15:44.436 "data_size": 65536
00:15:44.436 }
00:15:44.436 ]
00:15:44.436 }'
00:15:44.436 11:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:44.436 11:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:44.436 11:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:44.436 11:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:44.436 11:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:44.436 11:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:44.436 11:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:44.436 [2024-11-20 11:25:27.400929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:44.436 11:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:44.436 11:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:15:44.436 [2024-11-20 11:25:27.451092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:15:44.436 [2024-11-20 11:25:27.453255] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:44.694 [2024-11-20 11:25:27.563696] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:15:44.694 [2024-11-20 11:25:27.564354] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:15:44.694 [2024-11-20 11:25:27.673885] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:15:44.694 [2024-11-20 11:25:27.674264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:15:45.211 121.00 IOPS, 363.00 MiB/s [2024-11-20T11:25:28.327Z] [2024-11-20 11:25:28.147316] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:15:45.469 [2024-11-20 11:25:28.390921] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:15:45.469 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:45.469 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:45.469 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:45.469 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:45.469 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:45.469 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:45.469 11:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:45.469 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:45.469 11:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:45.469 11:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:45.469 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:45.469 "name": "raid_bdev1",
00:15:45.469 "uuid": "7fec2b7f-c321-454f-911b-996144ec96df",
00:15:45.469 "strip_size_kb": 0,
00:15:45.469 "state": "online",
00:15:45.469 "raid_level": "raid1",
00:15:45.469 "superblock": false,
00:15:45.469 "num_base_bdevs": 4,
00:15:45.469 "num_base_bdevs_discovered": 4,
00:15:45.469 "num_base_bdevs_operational": 4,
00:15:45.469 "process": {
00:15:45.469 "type": "rebuild",
00:15:45.469 "target": "spare",
00:15:45.469 "progress": {
00:15:45.469 "blocks": 14336,
00:15:45.469 "percent": 21
00:15:45.469 }
00:15:45.469 },
00:15:45.469 "base_bdevs_list": [
00:15:45.469 {
00:15:45.469 "name": "spare",
00:15:45.469 "uuid": "e28edc1b-5a84-5054-99e1-3ba1c1a5a923",
00:15:45.469 "is_configured": true,
00:15:45.469 "data_offset": 0,
00:15:45.469 "data_size": 65536
00:15:45.469 },
00:15:45.469 {
00:15:45.469 "name": "BaseBdev2",
00:15:45.469 "uuid": "e7f25b93-ef41-59c4-ba52-b79c93dfa907",
00:15:45.469 "is_configured": true,
00:15:45.469 "data_offset": 0,
00:15:45.469 "data_size": 65536
00:15:45.469 },
00:15:45.469 {
00:15:45.469 "name": "BaseBdev3",
00:15:45.469 "uuid": "49192267-4a18-5bac-aa2c-b2009f9107e7",
00:15:45.469 "is_configured": true,
00:15:45.469 "data_offset": 0,
00:15:45.469 "data_size": 65536
00:15:45.469 },
00:15:45.469 {
00:15:45.469 "name": "BaseBdev4",
00:15:45.469 "uuid": "1bad4b7b-ddcd-5ca1-a21c-dce8ed9ef5e6",
00:15:45.469 "is_configured": true,
00:15:45.469 "data_offset": 0,
00:15:45.469 "data_size": 65536
00:15:45.469 }
00:15:45.469 ]
00:15:45.469 }'
00:15:45.470 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:45.470 [2024-11-20 11:25:28.523709] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:15:45.470 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:45.470 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']'
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:45.728 [2024-11-20 11:25:28.608579] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:45.728 [2024-11-20 11:25:28.732392] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220
00:15:45.728 [2024-11-20 11:25:28.732472] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]=
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- ))
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:45.728 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:45.728 "name": "raid_bdev1",
00:15:45.728 "uuid": "7fec2b7f-c321-454f-911b-996144ec96df",
00:15:45.728 "strip_size_kb": 0,
00:15:45.728 "state": "online",
00:15:45.728 "raid_level": "raid1",
00:15:45.728 "superblock": false,
00:15:45.729 "num_base_bdevs": 4,
00:15:45.729 "num_base_bdevs_discovered": 3,
00:15:45.729 "num_base_bdevs_operational": 3,
00:15:45.729 "process": {
00:15:45.729 "type": "rebuild",
00:15:45.729 "target": "spare",
00:15:45.729 "progress": {
00:15:45.729 "blocks": 18432,
00:15:45.729 "percent": 28
00:15:45.729 }
00:15:45.729 },
00:15:45.729 "base_bdevs_list": [
00:15:45.729 {
00:15:45.729 "name": "spare",
00:15:45.729 "uuid": "e28edc1b-5a84-5054-99e1-3ba1c1a5a923",
00:15:45.729 "is_configured": true,
00:15:45.729 "data_offset": 0,
00:15:45.729 "data_size": 65536
00:15:45.729 },
00:15:45.729 {
00:15:45.729 "name": null,
00:15:45.729 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:45.729 "is_configured": false,
00:15:45.729 "data_offset": 0,
00:15:45.729 "data_size": 65536
00:15:45.729 },
00:15:45.729 {
00:15:45.729 "name": "BaseBdev3",
00:15:45.729 "uuid": "49192267-4a18-5bac-aa2c-b2009f9107e7",
00:15:45.729 "is_configured": true,
00:15:45.729 "data_offset": 0,
00:15:45.729 "data_size": 65536
00:15:45.729 },
00:15:45.729 {
00:15:45.729 "name": "BaseBdev4",
00:15:45.729 "uuid": "1bad4b7b-ddcd-5ca1-a21c-dce8ed9ef5e6",
00:15:45.729 "is_configured": true,
00:15:45.729 "data_offset": 0,
00:15:45.729 "data_size": 65536
00:15:45.729 }
00:15:45.729 ]
00:15:45.729 }'
00:15:45.729 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:45.729 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:45.729 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:45.987 [2024-11-20 11:25:28.879567] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:15:45.987 [2024-11-20 11:25:28.880176] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:15:45.987 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:45.987 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=494
00:15:45.987 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:45.987 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:45.987 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:45.987 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:45.987 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:45.987 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:45.987 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:45.987 11:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:45.987 11:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:45.987 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:45.987 11:25:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:45.987 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:45.987 "name": "raid_bdev1",
00:15:45.987 "uuid": "7fec2b7f-c321-454f-911b-996144ec96df",
00:15:45.987 "strip_size_kb": 0,
00:15:45.987 "state": "online",
00:15:45.987 "raid_level": "raid1",
00:15:45.987 "superblock": false,
00:15:45.987 "num_base_bdevs": 4,
00:15:45.987 "num_base_bdevs_discovered": 3,
00:15:45.987 "num_base_bdevs_operational": 3,
00:15:45.987 "process": {
00:15:45.987 "type": "rebuild",
00:15:45.987 "target": "spare",
00:15:45.987 "progress": {
00:15:45.987 "blocks": 20480,
00:15:45.987 "percent": 31
00:15:45.987 }
00:15:45.987 },
00:15:45.987 "base_bdevs_list": [
00:15:45.987 {
00:15:45.987 "name": "spare",
00:15:45.987 "uuid": "e28edc1b-5a84-5054-99e1-3ba1c1a5a923",
00:15:45.987 "is_configured": true,
00:15:45.987 "data_offset": 0,
00:15:45.987 "data_size": 65536
00:15:45.987 },
00:15:45.987 {
00:15:45.987 "name": null,
00:15:45.987 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:45.987 "is_configured": false,
00:15:45.987 "data_offset": 0,
00:15:45.987 "data_size": 65536
00:15:45.987 },
00:15:45.987 {
00:15:45.987 "name": "BaseBdev3",
00:15:45.987 "uuid": "49192267-4a18-5bac-aa2c-b2009f9107e7",
00:15:45.987 "is_configured": true,
00:15:45.987 "data_offset": 0,
00:15:45.987 "data_size": 65536
00:15:45.987 },
00:15:45.987 {
00:15:45.987 "name": "BaseBdev4",
00:15:45.987 "uuid": "1bad4b7b-ddcd-5ca1-a21c-dce8ed9ef5e6",
00:15:45.987 "is_configured": true,
00:15:45.987 "data_offset": 0,
00:15:45.987 "data_size": 65536
00:15:45.987 }
00:15:45.987 ]
00:15:45.987 }'
00:15:45.987 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:45.987 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:45.987 11:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:45.987 120.25 IOPS, 360.75 MiB/s [2024-11-20T11:25:29.103Z] 11:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:45.987 11:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:15:45.987 [2024-11-20 11:25:29.100696] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:15:46.924 [2024-11-20 11:25:29.870082] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864
00:15:46.924 106.80 IOPS, 320.40 MiB/s [2024-11-20T11:25:30.040Z] 11:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:46.924 11:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:46.924 11:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:46.924 11:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:46.924 11:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:46.924 11:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:46.924 11:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:46.924 11:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:46.924 11:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.924 11:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:47.183 11:25:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:47.183 11:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:47.183 "name": "raid_bdev1",
00:15:47.183 "uuid": "7fec2b7f-c321-454f-911b-996144ec96df",
00:15:47.183 "strip_size_kb": 0,
00:15:47.183 "state": "online",
00:15:47.183 "raid_level": "raid1",
00:15:47.183 "superblock": false,
00:15:47.183 "num_base_bdevs": 4,
00:15:47.183 "num_base_bdevs_discovered": 3,
00:15:47.183 "num_base_bdevs_operational": 3,
00:15:47.183 "process": {
00:15:47.183 "type": "rebuild",
00:15:47.183 "target": "spare",
00:15:47.183 "progress": {
00:15:47.183 "blocks": 34816,
00:15:47.183 "percent": 53
00:15:47.183 }
00:15:47.183 },
00:15:47.183 "base_bdevs_list": [
00:15:47.183 {
00:15:47.183 "name": "spare",
00:15:47.183 "uuid": "e28edc1b-5a84-5054-99e1-3ba1c1a5a923",
00:15:47.183 "is_configured": true,
00:15:47.183 "data_offset": 0,
00:15:47.183 "data_size": 65536
00:15:47.183 },
00:15:47.183 {
00:15:47.183 "name": null,
00:15:47.183 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:47.183 "is_configured": false,
00:15:47.183 "data_offset": 0,
00:15:47.183 "data_size": 65536
00:15:47.183 },
00:15:47.183 {
00:15:47.183 "name": "BaseBdev3",
00:15:47.183 "uuid": "49192267-4a18-5bac-aa2c-b2009f9107e7",
00:15:47.183 "is_configured": true,
00:15:47.183 "data_offset": 0,
00:15:47.183 "data_size": 65536
00:15:47.183 },
00:15:47.183 {
00:15:47.183 "name": "BaseBdev4",
00:15:47.183 "uuid": "1bad4b7b-ddcd-5ca1-a21c-dce8ed9ef5e6",
00:15:47.183 "is_configured": true,
00:15:47.183 "data_offset": 0,
00:15:47.184 "data_size": 65536
00:15:47.184 }
00:15:47.184 ]
00:15:47.184 }'
00:15:47.184 11:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:47.184 11:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:47.184 11:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:47.184 11:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:47.184 11:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:15:47.184 [2024-11-20 11:25:30.211603] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008
00:15:47.442 [2024-11-20 11:25:30.540227] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152
00:15:48.010 [2024-11-20 11:25:30.861967] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296
00:15:48.010 [2024-11-20 11:25:30.972815] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296
00:15:48.269 98.67 IOPS, 296.00 MiB/s [2024-11-20T11:25:31.385Z] 11:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:48.269 11:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:48.269 11:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:48.269 11:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:48.269 11:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:48.269 11:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:48.269 11:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:48.269 11:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:48.269 11:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:48.269 11:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:48.269 [2024-11-20 11:25:31.198631] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440
00:15:48.269 11:25:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:48.269 11:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:48.269 "name": "raid_bdev1",
00:15:48.269 "uuid": "7fec2b7f-c321-454f-911b-996144ec96df",
00:15:48.269 "strip_size_kb": 0,
00:15:48.269 "state": "online",
00:15:48.269 "raid_level": "raid1",
00:15:48.269 "superblock": false,
00:15:48.269 "num_base_bdevs": 4,
00:15:48.269 "num_base_bdevs_discovered": 3,
00:15:48.269 "num_base_bdevs_operational": 3,
00:15:48.269 "process": {
00:15:48.269 "type": "rebuild",
00:15:48.269 "target": "spare",
00:15:48.269 "progress": {
00:15:48.269 "blocks": 55296,
00:15:48.269 "percent": 84
00:15:48.269 }
00:15:48.269 },
00:15:48.269 "base_bdevs_list": [
00:15:48.269 {
00:15:48.269 "name": "spare",
00:15:48.269 "uuid": "e28edc1b-5a84-5054-99e1-3ba1c1a5a923",
00:15:48.269 "is_configured": true,
00:15:48.269 "data_offset": 0,
00:15:48.269 "data_size": 65536
00:15:48.269 },
00:15:48.269 {
00:15:48.269 "name": null,
00:15:48.269 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:48.269 "is_configured": false,
00:15:48.269 "data_offset": 0,
00:15:48.269 "data_size": 65536
00:15:48.269 },
00:15:48.269 {
00:15:48.269 "name": "BaseBdev3",
00:15:48.269 "uuid": "49192267-4a18-5bac-aa2c-b2009f9107e7",
00:15:48.269 "is_configured": true,
00:15:48.269 "data_offset": 0,
00:15:48.269 "data_size": 65536
00:15:48.269 },
00:15:48.269 {
00:15:48.269 "name": "BaseBdev4",
00:15:48.269 "uuid": "1bad4b7b-ddcd-5ca1-a21c-dce8ed9ef5e6",
00:15:48.269 "is_configured": true,
00:15:48.269 "data_offset": 0,
00:15:48.269 "data_size": 65536
00:15:48.269 }
00:15:48.269 ]
00:15:48.269 }'
00:15:48.269 11:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:48.269 11:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:48.269 11:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:48.269 11:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:48.269 11:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:15:48.542 [2024-11-20 11:25:31.412587] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440
00:15:48.845 [2024-11-20 11:25:31.744094] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:15:48.845 [2024-11-20 11:25:31.768999] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:15:48.845 [2024-11-20 11:25:31.773685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:49.362 90.00 IOPS, 270.00 MiB/s [2024-11-20T11:25:32.478Z] 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:49.362 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:49.363 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:49.363 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:49.363 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:49.363 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:49.363 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:49.363 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:49.363 11:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:49.363 11:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:49.363 11:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:49.363 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:49.363 "name": "raid_bdev1",
00:15:49.363 "uuid": "7fec2b7f-c321-454f-911b-996144ec96df",
00:15:49.363 "strip_size_kb": 0,
00:15:49.363 "state": "online",
00:15:49.363 "raid_level": "raid1",
00:15:49.363 "superblock": false,
00:15:49.363 "num_base_bdevs": 4,
00:15:49.363 "num_base_bdevs_discovered": 3,
00:15:49.363 "num_base_bdevs_operational": 3,
00:15:49.363 "base_bdevs_list": [
00:15:49.363 {
00:15:49.363 "name": "spare",
00:15:49.363 "uuid": "e28edc1b-5a84-5054-99e1-3ba1c1a5a923",
00:15:49.363 "is_configured": true,
00:15:49.363 "data_offset": 0,
00:15:49.363 "data_size": 65536
00:15:49.363 },
00:15:49.363 {
00:15:49.363 "name": null,
00:15:49.363 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:49.363 "is_configured": false,
00:15:49.363 "data_offset": 0,
00:15:49.363 "data_size": 65536
00:15:49.363 },
00:15:49.363 {
00:15:49.363 "name": "BaseBdev3",
00:15:49.363 "uuid": "49192267-4a18-5bac-aa2c-b2009f9107e7",
00:15:49.363 "is_configured": true,
00:15:49.363 "data_offset": 0,
00:15:49.363 "data_size": 65536
00:15:49.363 },
00:15:49.363 {
00:15:49.363 "name": "BaseBdev4",
00:15:49.363 "uuid": "1bad4b7b-ddcd-5ca1-a21c-dce8ed9ef5e6",
00:15:49.363 "is_configured": true,
00:15:49.363 "data_offset": 0,
00:15:49.363 "data_size": 65536
00:15:49.363 }
00:15:49.363 ]
00:15:49.363 }'
00:15:49.363 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:49.363 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:15:49.363 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:49.623 "name": "raid_bdev1",
00:15:49.623 "uuid": "7fec2b7f-c321-454f-911b-996144ec96df",
00:15:49.623 "strip_size_kb": 0,
00:15:49.623 "state": "online",
00:15:49.623 "raid_level": "raid1",
00:15:49.623 "superblock": false,
00:15:49.623 "num_base_bdevs": 4,
00:15:49.623 "num_base_bdevs_discovered": 3,
00:15:49.623 "num_base_bdevs_operational": 3,
00:15:49.623 "base_bdevs_list": [
00:15:49.623 {
00:15:49.623 "name": "spare",
00:15:49.623 "uuid": "e28edc1b-5a84-5054-99e1-3ba1c1a5a923",
00:15:49.623 "is_configured": true,
00:15:49.623 "data_offset": 0,
00:15:49.623 "data_size": 65536
00:15:49.623 },
00:15:49.623 {
00:15:49.623 "name": null,
00:15:49.623 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:49.623 "is_configured": false,
00:15:49.623 "data_offset": 0,
00:15:49.623 "data_size": 65536
00:15:49.623 },
00:15:49.623 {
00:15:49.623 "name": "BaseBdev3",
00:15:49.623 "uuid": "49192267-4a18-5bac-aa2c-b2009f9107e7",
00:15:49.623 "is_configured": true,
00:15:49.623 "data_offset": 0,
00:15:49.623 "data_size": 65536
00:15:49.623 },
00:15:49.623 {
00:15:49.623 "name": "BaseBdev4",
00:15:49.623 "uuid": "1bad4b7b-ddcd-5ca1-a21c-dce8ed9ef5e6",
00:15:49.623 "is_configured": true,
00:15:49.623 "data_offset": 0,
00:15:49.623 "data_size": 65536
00:15:49.623 }
00:15:49.623 ]
00:15:49.623 }'
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:49.623 "name": "raid_bdev1",
00:15:49.623 "uuid": "7fec2b7f-c321-454f-911b-996144ec96df",
00:15:49.623 "strip_size_kb": 0,
00:15:49.623 "state": "online",
00:15:49.623 "raid_level": "raid1",
00:15:49.623 "superblock": false,
00:15:49.623 "num_base_bdevs": 4,
00:15:49.623 "num_base_bdevs_discovered": 3,
00:15:49.623 "num_base_bdevs_operational": 3,
00:15:49.623 "base_bdevs_list": [
00:15:49.623 {
00:15:49.623 "name": "spare",
00:15:49.623 "uuid": "e28edc1b-5a84-5054-99e1-3ba1c1a5a923",
00:15:49.623 "is_configured": true,
00:15:49.623 "data_offset": 0,
00:15:49.623 "data_size": 65536
00:15:49.623 },
00:15:49.623 {
00:15:49.623 "name": null,
00:15:49.623 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:49.623 "is_configured": false,
00:15:49.623 "data_offset": 0,
00:15:49.623 "data_size": 65536
00:15:49.623 },
00:15:49.623 {
00:15:49.623 "name": "BaseBdev3",
00:15:49.623 "uuid": "49192267-4a18-5bac-aa2c-b2009f9107e7",
00:15:49.623 "is_configured": true,
00:15:49.623 "data_offset": 0,
00:15:49.623 "data_size": 65536
00:15:49.623 },
00:15:49.623 {
00:15:49.623 "name": "BaseBdev4",
00:15:49.623 "uuid": "1bad4b7b-ddcd-5ca1-a21c-dce8ed9ef5e6",
00:15:49.623 "is_configured": true,
00:15:49.623 "data_offset": 0,
00:15:49.623 "data_size": 65536
00:15:49.623 }
00:15:49.623 ]
00:15:49.623 }'
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:49.623 11:25:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:50.143 84.75 IOPS, 254.25 MiB/s [2024-11-20T11:25:33.259Z] 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:50.143 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:50.143 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:50.143 [2024-11-20 11:25:33.079836] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:50.143 [2024-11-20 11:25:33.079874] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:50.143
00:15:50.143 Latency(us)
00:15:50.143 [2024-11-20T11:25:33.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:50.143 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:15:50.143 raid_bdev1 : 8.13 83.90 251.71 0.00 0.00 17147.97 357.73 111726.00
00:15:50.143 [2024-11-20T11:25:33.259Z] ===================================================================================================================
00:15:50.143 [2024-11-20T11:25:33.259Z] Total : 83.90 251.71 0.00 0.00 17147.97 357.73 111726.00
00:15:50.143 [2024-11-20 11:25:33.118699] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:50.143 [2024-11-20 11:25:33.118762] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*:
raid_bdev_destruct 00:15:50.143 [2024-11-20 11:25:33.118887] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.143 [2024-11-20 11:25:33.118900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:50.143 { 00:15:50.143 "results": [ 00:15:50.143 { 00:15:50.143 "job": "raid_bdev1", 00:15:50.143 "core_mask": "0x1", 00:15:50.143 "workload": "randrw", 00:15:50.143 "percentage": 50, 00:15:50.143 "status": "finished", 00:15:50.143 "queue_depth": 2, 00:15:50.143 "io_size": 3145728, 00:15:50.143 "runtime": 8.128522, 00:15:50.143 "iops": 83.90209191781729, 00:15:50.143 "mibps": 251.70627575345185, 00:15:50.143 "io_failed": 0, 00:15:50.143 "io_timeout": 0, 00:15:50.143 "avg_latency_us": 17147.9748901894, 00:15:50.143 "min_latency_us": 357.7292576419214, 00:15:50.143 "max_latency_us": 111726.00174672488 00:15:50.143 } 00:15:50.143 ], 00:15:50.143 "core_count": 1 00:15:50.143 } 00:15:50.143 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.143 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.143 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.143 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:50.143 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.143 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.143 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:50.143 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:50.143 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:50.143 11:25:33 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:50.143 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:50.143 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:50.143 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:50.143 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:50.143 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:50.143 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:50.143 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:50.143 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:50.143 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:50.402 /dev/nbd0 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.402 1+0 records in 00:15:50.402 1+0 records out 00:15:50.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418792 s, 9.8 MB/s 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:50.402 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:50.662 /dev/nbd1 00:15:50.662 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:50.662 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:50.662 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:50.662 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:50.662 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:50.662 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:50.662 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:50.662 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:50.662 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:50.662 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i 
<= 20 )) 00:15:50.662 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.662 1+0 records in 00:15:50.662 1+0 records out 00:15:50.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286747 s, 14.3 MB/s 00:15:50.662 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.662 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:50.662 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.921 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:50.921 11:25:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:50.921 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:50.921 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:50.921 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:50.921 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:50.921 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:50.921 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:50.921 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:50.921 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:50.921 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.921 11:25:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:51.180 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:51.180 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:51.180 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:51.180 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.180 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.180 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:51.180 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:51.180 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.180 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:51.180 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:51.180 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:51.180 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.180 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:51.180 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:51.180 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:51.180 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:51.180 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:51.180 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:51.180 11:25:34 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:51.180 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:51.439 /dev/nbd1 00:15:51.439 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:51.439 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:51.439 11:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:51.439 11:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:51.439 11:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:51.439 11:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:51.439 11:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:51.439 11:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:51.439 11:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:51.439 11:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:51.439 11:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.439 1+0 records in 00:15:51.439 1+0 records out 00:15:51.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305634 s, 13.4 MB/s 00:15:51.439 11:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.439 11:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:51.439 11:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.439 11:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:51.439 11:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:51.439 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:51.439 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:51.439 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:51.699 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:51.699 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.699 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:51.699 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:51.699 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:51.699 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.699 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:51.957 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:51.957 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:51.957 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:51.957 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.957 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.957 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:15:51.957 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:51.957 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.957 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:51.957 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.957 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:51.957 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:51.957 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:51.957 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.957 11:25:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:52.279 11:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:52.279 11:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:52.279 11:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:52.279 11:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.279 11:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.279 11:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:52.279 11:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:52.279 11:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.279 11:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:52.279 11:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # 
killprocess 78910 00:15:52.279 11:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78910 ']' 00:15:52.279 11:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78910 00:15:52.279 11:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:15:52.279 11:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.279 11:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78910 00:15:52.279 11:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.279 11:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.279 killing process with pid 78910 00:15:52.279 11:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78910' 00:15:52.279 11:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78910 00:15:52.279 Received shutdown signal, test time was about 10.287232 seconds 00:15:52.279 00:15:52.279 Latency(us) 00:15:52.279 [2024-11-20T11:25:35.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.279 [2024-11-20T11:25:35.395Z] =================================================================================================================== 00:15:52.279 [2024-11-20T11:25:35.395Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:52.279 [2024-11-20 11:25:35.246315] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.279 11:25:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78910 00:15:52.847 [2024-11-20 11:25:35.731001] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:54.228 00:15:54.228 real 0m13.994s 00:15:54.228 user 
0m17.725s 00:15:54.228 sys 0m1.876s 00:15:54.228 ************************************ 00:15:54.228 END TEST raid_rebuild_test_io 00:15:54.228 ************************************ 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.228 11:25:37 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:15:54.228 11:25:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:54.228 11:25:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.228 11:25:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:54.228 ************************************ 00:15:54.228 START TEST raid_rebuild_test_sb_io 00:15:54.228 ************************************ 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79325 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79325 00:15:54.228 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79325 ']' 00:15:54.229 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.229 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.229 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.229 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.229 11:25:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.229 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:54.229 Zero copy mechanism will not be used. 00:15:54.229 [2024-11-20 11:25:37.311760] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:15:54.229 [2024-11-20 11:25:37.311973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79325 ] 00:15:54.487 [2024-11-20 11:25:37.505938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.745 [2024-11-20 11:25:37.639417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.004 [2024-11-20 11:25:37.868556] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.004 [2024-11-20 11:25:37.868632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.263 BaseBdev1_malloc 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.263 [2024-11-20 11:25:38.309752] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:55.263 [2024-11-20 11:25:38.309846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.263 [2024-11-20 11:25:38.309877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:55.263 [2024-11-20 11:25:38.309890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.263 [2024-11-20 11:25:38.312323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.263 [2024-11-20 11:25:38.312445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:55.263 BaseBdev1 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.263 BaseBdev2_malloc 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.263 [2024-11-20 11:25:38.366416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:55.263 [2024-11-20 11:25:38.366585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:15:55.263 [2024-11-20 11:25:38.366617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:55.263 [2024-11-20 11:25:38.366633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.263 [2024-11-20 11:25:38.369233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.263 [2024-11-20 11:25:38.369291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:55.263 BaseBdev2 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.263 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.539 BaseBdev3_malloc 00:15:55.539 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.539 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:55.539 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.539 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.539 [2024-11-20 11:25:38.439093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:55.539 [2024-11-20 11:25:38.439162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.539 [2024-11-20 11:25:38.439190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:55.540 
[2024-11-20 11:25:38.439204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.540 [2024-11-20 11:25:38.441577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.540 [2024-11-20 11:25:38.441618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:55.540 BaseBdev3 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.540 BaseBdev4_malloc 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.540 [2024-11-20 11:25:38.504607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:55.540 [2024-11-20 11:25:38.504788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.540 [2024-11-20 11:25:38.504819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:55.540 [2024-11-20 11:25:38.504832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.540 [2024-11-20 11:25:38.507344] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.540 [2024-11-20 11:25:38.507396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:55.540 BaseBdev4 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.540 spare_malloc 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.540 spare_delay 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.540 [2024-11-20 11:25:38.577985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:55.540 [2024-11-20 11:25:38.578071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.540 [2024-11-20 11:25:38.578099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:15:55.540 [2024-11-20 11:25:38.578112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.540 [2024-11-20 11:25:38.580675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.540 [2024-11-20 11:25:38.580737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:55.540 spare 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.540 [2024-11-20 11:25:38.590013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.540 [2024-11-20 11:25:38.592226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:55.540 [2024-11-20 11:25:38.592306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:55.540 [2024-11-20 11:25:38.592366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:55.540 [2024-11-20 11:25:38.592590] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:55.540 [2024-11-20 11:25:38.592612] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:55.540 [2024-11-20 11:25:38.592932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:55.540 [2024-11-20 11:25:38.593158] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:55.540 [2024-11-20 11:25:38.593171] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:55.540 [2024-11-20 11:25:38.593388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.540 "name": "raid_bdev1", 00:15:55.540 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:15:55.540 "strip_size_kb": 0, 00:15:55.540 "state": "online", 00:15:55.540 "raid_level": "raid1", 00:15:55.540 "superblock": true, 00:15:55.540 "num_base_bdevs": 4, 00:15:55.540 "num_base_bdevs_discovered": 4, 00:15:55.540 "num_base_bdevs_operational": 4, 00:15:55.540 "base_bdevs_list": [ 00:15:55.540 { 00:15:55.540 "name": "BaseBdev1", 00:15:55.540 "uuid": "f3884667-d2b2-5845-a945-47e4cde243f7", 00:15:55.540 "is_configured": true, 00:15:55.540 "data_offset": 2048, 00:15:55.540 "data_size": 63488 00:15:55.540 }, 00:15:55.540 { 00:15:55.540 "name": "BaseBdev2", 00:15:55.540 "uuid": "7d63e995-1c5c-5577-b932-8aa8dd31ba92", 00:15:55.540 "is_configured": true, 00:15:55.540 "data_offset": 2048, 00:15:55.540 "data_size": 63488 00:15:55.540 }, 00:15:55.540 { 00:15:55.540 "name": "BaseBdev3", 00:15:55.540 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:15:55.540 "is_configured": true, 00:15:55.540 "data_offset": 2048, 00:15:55.540 "data_size": 63488 00:15:55.540 }, 00:15:55.540 { 00:15:55.540 "name": "BaseBdev4", 00:15:55.540 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:15:55.540 "is_configured": true, 00:15:55.540 "data_offset": 2048, 00:15:55.540 "data_size": 63488 00:15:55.540 } 00:15:55.540 ] 00:15:55.540 }' 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.540 11:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.109 [2024-11-20 11:25:39.069647] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.109 [2024-11-20 11:25:39.165049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.109 11:25:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.109 "name": "raid_bdev1", 00:15:56.109 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:15:56.109 "strip_size_kb": 0, 00:15:56.109 "state": "online", 00:15:56.109 "raid_level": "raid1", 00:15:56.109 
"superblock": true, 00:15:56.109 "num_base_bdevs": 4, 00:15:56.109 "num_base_bdevs_discovered": 3, 00:15:56.109 "num_base_bdevs_operational": 3, 00:15:56.109 "base_bdevs_list": [ 00:15:56.109 { 00:15:56.109 "name": null, 00:15:56.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.109 "is_configured": false, 00:15:56.109 "data_offset": 0, 00:15:56.109 "data_size": 63488 00:15:56.109 }, 00:15:56.109 { 00:15:56.109 "name": "BaseBdev2", 00:15:56.109 "uuid": "7d63e995-1c5c-5577-b932-8aa8dd31ba92", 00:15:56.109 "is_configured": true, 00:15:56.109 "data_offset": 2048, 00:15:56.109 "data_size": 63488 00:15:56.109 }, 00:15:56.109 { 00:15:56.109 "name": "BaseBdev3", 00:15:56.109 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:15:56.109 "is_configured": true, 00:15:56.109 "data_offset": 2048, 00:15:56.109 "data_size": 63488 00:15:56.109 }, 00:15:56.109 { 00:15:56.109 "name": "BaseBdev4", 00:15:56.109 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:15:56.109 "is_configured": true, 00:15:56.109 "data_offset": 2048, 00:15:56.109 "data_size": 63488 00:15:56.109 } 00:15:56.109 ] 00:15:56.109 }' 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.109 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.368 [2024-11-20 11:25:39.322502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:56.368 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:56.368 Zero copy mechanism will not be used. 00:15:56.368 Running I/O for 60 seconds... 
00:15:56.626 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:56.626 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.626 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.626 [2024-11-20 11:25:39.612537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:56.626 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.626 11:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:56.626 [2024-11-20 11:25:39.663738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:56.626 [2024-11-20 11:25:39.666159] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:56.887 [2024-11-20 11:25:39.786578] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:56.887 [2024-11-20 11:25:39.787291] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:56.887 [2024-11-20 11:25:39.890952] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:56.887 [2024-11-20 11:25:39.891329] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:57.146 [2024-11-20 11:25:40.244985] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:57.405 146.00 IOPS, 438.00 MiB/s [2024-11-20T11:25:40.522Z] [2024-11-20 11:25:40.367513] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:57.406 [2024-11-20 11:25:40.367996] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:57.665 [2024-11-20 11:25:40.627242] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:57.665 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.665 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.665 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.665 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.665 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.665 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.665 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.665 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.665 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.665 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.665 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.665 "name": "raid_bdev1", 00:15:57.665 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:15:57.665 "strip_size_kb": 0, 00:15:57.665 "state": "online", 00:15:57.665 "raid_level": "raid1", 00:15:57.665 "superblock": true, 00:15:57.665 "num_base_bdevs": 4, 00:15:57.665 "num_base_bdevs_discovered": 4, 00:15:57.665 "num_base_bdevs_operational": 4, 00:15:57.665 "process": { 00:15:57.665 "type": "rebuild", 00:15:57.665 "target": "spare", 00:15:57.665 "progress": { 
00:15:57.665 "blocks": 14336, 00:15:57.665 "percent": 22 00:15:57.665 } 00:15:57.665 }, 00:15:57.665 "base_bdevs_list": [ 00:15:57.665 { 00:15:57.665 "name": "spare", 00:15:57.665 "uuid": "a9d46fb6-d712-5eec-a5d9-af0d0693fe73", 00:15:57.665 "is_configured": true, 00:15:57.665 "data_offset": 2048, 00:15:57.665 "data_size": 63488 00:15:57.665 }, 00:15:57.665 { 00:15:57.665 "name": "BaseBdev2", 00:15:57.665 "uuid": "7d63e995-1c5c-5577-b932-8aa8dd31ba92", 00:15:57.665 "is_configured": true, 00:15:57.665 "data_offset": 2048, 00:15:57.665 "data_size": 63488 00:15:57.665 }, 00:15:57.665 { 00:15:57.665 "name": "BaseBdev3", 00:15:57.665 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:15:57.665 "is_configured": true, 00:15:57.665 "data_offset": 2048, 00:15:57.665 "data_size": 63488 00:15:57.665 }, 00:15:57.665 { 00:15:57.665 "name": "BaseBdev4", 00:15:57.665 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:15:57.665 "is_configured": true, 00:15:57.665 "data_offset": 2048, 00:15:57.665 "data_size": 63488 00:15:57.665 } 00:15:57.665 ] 00:15:57.665 }' 00:15:57.665 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.665 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.665 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.924 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.924 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:57.924 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.924 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.924 [2024-11-20 11:25:40.803846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:57.924 [2024-11-20 
11:25:40.841760] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:57.924 [2024-11-20 11:25:40.842222] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:57.924 [2024-11-20 11:25:40.857857] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:57.924 [2024-11-20 11:25:40.872440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.924 [2024-11-20 11:25:40.872625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:57.924 [2024-11-20 11:25:40.872666] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:57.924 [2024-11-20 11:25:40.917587] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:57.924 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.924 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:57.924 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.924 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.924 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.924 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.924 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.924 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.924 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:57.924 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.924 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.924 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.924 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.924 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.924 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.924 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.924 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.924 "name": "raid_bdev1", 00:15:57.924 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:15:57.924 "strip_size_kb": 0, 00:15:57.924 "state": "online", 00:15:57.924 "raid_level": "raid1", 00:15:57.924 "superblock": true, 00:15:57.924 "num_base_bdevs": 4, 00:15:57.924 "num_base_bdevs_discovered": 3, 00:15:57.924 "num_base_bdevs_operational": 3, 00:15:57.924 "base_bdevs_list": [ 00:15:57.924 { 00:15:57.924 "name": null, 00:15:57.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.924 "is_configured": false, 00:15:57.924 "data_offset": 0, 00:15:57.924 "data_size": 63488 00:15:57.924 }, 00:15:57.924 { 00:15:57.924 "name": "BaseBdev2", 00:15:57.924 "uuid": "7d63e995-1c5c-5577-b932-8aa8dd31ba92", 00:15:57.924 "is_configured": true, 00:15:57.924 "data_offset": 2048, 00:15:57.924 "data_size": 63488 00:15:57.924 }, 00:15:57.924 { 00:15:57.924 "name": "BaseBdev3", 00:15:57.924 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:15:57.924 "is_configured": true, 00:15:57.924 "data_offset": 2048, 00:15:57.924 "data_size": 63488 00:15:57.924 }, 00:15:57.924 { 00:15:57.924 "name": "BaseBdev4", 
00:15:57.924 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:15:57.924 "is_configured": true, 00:15:57.924 "data_offset": 2048, 00:15:57.925 "data_size": 63488 00:15:57.925 } 00:15:57.925 ] 00:15:57.925 }' 00:15:57.925 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.925 11:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.491 137.50 IOPS, 412.50 MiB/s [2024-11-20T11:25:41.607Z] 11:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:58.491 11:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.491 11:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:58.491 11:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:58.491 11:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.491 11:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.491 11:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.491 11:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.491 11:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.491 11:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.491 11:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.491 "name": "raid_bdev1", 00:15:58.491 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:15:58.491 "strip_size_kb": 0, 00:15:58.491 "state": "online", 00:15:58.491 "raid_level": "raid1", 00:15:58.491 "superblock": true, 00:15:58.491 "num_base_bdevs": 4, 00:15:58.491 
"num_base_bdevs_discovered": 3, 00:15:58.491 "num_base_bdevs_operational": 3, 00:15:58.491 "base_bdevs_list": [ 00:15:58.491 { 00:15:58.491 "name": null, 00:15:58.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.491 "is_configured": false, 00:15:58.491 "data_offset": 0, 00:15:58.491 "data_size": 63488 00:15:58.491 }, 00:15:58.491 { 00:15:58.491 "name": "BaseBdev2", 00:15:58.491 "uuid": "7d63e995-1c5c-5577-b932-8aa8dd31ba92", 00:15:58.491 "is_configured": true, 00:15:58.491 "data_offset": 2048, 00:15:58.491 "data_size": 63488 00:15:58.491 }, 00:15:58.491 { 00:15:58.491 "name": "BaseBdev3", 00:15:58.491 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:15:58.491 "is_configured": true, 00:15:58.491 "data_offset": 2048, 00:15:58.491 "data_size": 63488 00:15:58.491 }, 00:15:58.491 { 00:15:58.491 "name": "BaseBdev4", 00:15:58.491 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:15:58.491 "is_configured": true, 00:15:58.491 "data_offset": 2048, 00:15:58.491 "data_size": 63488 00:15:58.491 } 00:15:58.491 ] 00:15:58.491 }' 00:15:58.491 11:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.491 11:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:58.491 11:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.491 11:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:58.491 11:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:58.491 11:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.491 11:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.491 [2024-11-20 11:25:41.556189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:58.766 11:25:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.766 11:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:58.766 [2024-11-20 11:25:41.634986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:58.766 [2024-11-20 11:25:41.637358] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:58.766 [2024-11-20 11:25:41.757475] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:58.766 [2024-11-20 11:25:41.758121] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:59.057 [2024-11-20 11:25:41.977667] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:59.057 [2024-11-20 11:25:41.978624] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:59.316 [2024-11-20 11:25:42.324823] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:59.574 122.33 IOPS, 367.00 MiB/s [2024-11-20T11:25:42.690Z] [2024-11-20 11:25:42.547885] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:59.574 [2024-11-20 11:25:42.548365] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:59.574 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.574 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.574 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.574 
11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.574 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.574 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.574 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.574 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.574 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.574 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.574 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.574 "name": "raid_bdev1", 00:15:59.574 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:15:59.574 "strip_size_kb": 0, 00:15:59.574 "state": "online", 00:15:59.574 "raid_level": "raid1", 00:15:59.574 "superblock": true, 00:15:59.574 "num_base_bdevs": 4, 00:15:59.574 "num_base_bdevs_discovered": 4, 00:15:59.574 "num_base_bdevs_operational": 4, 00:15:59.574 "process": { 00:15:59.574 "type": "rebuild", 00:15:59.574 "target": "spare", 00:15:59.574 "progress": { 00:15:59.574 "blocks": 10240, 00:15:59.574 "percent": 16 00:15:59.574 } 00:15:59.574 }, 00:15:59.574 "base_bdevs_list": [ 00:15:59.574 { 00:15:59.574 "name": "spare", 00:15:59.574 "uuid": "a9d46fb6-d712-5eec-a5d9-af0d0693fe73", 00:15:59.574 "is_configured": true, 00:15:59.574 "data_offset": 2048, 00:15:59.574 "data_size": 63488 00:15:59.574 }, 00:15:59.574 { 00:15:59.574 "name": "BaseBdev2", 00:15:59.574 "uuid": "7d63e995-1c5c-5577-b932-8aa8dd31ba92", 00:15:59.574 "is_configured": true, 00:15:59.574 "data_offset": 2048, 00:15:59.574 "data_size": 63488 00:15:59.574 }, 00:15:59.574 { 00:15:59.574 "name": "BaseBdev3", 00:15:59.574 "uuid": 
"72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:15:59.574 "is_configured": true, 00:15:59.574 "data_offset": 2048, 00:15:59.574 "data_size": 63488 00:15:59.574 }, 00:15:59.574 { 00:15:59.574 "name": "BaseBdev4", 00:15:59.574 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:15:59.574 "is_configured": true, 00:15:59.574 "data_offset": 2048, 00:15:59.574 "data_size": 63488 00:15:59.574 } 00:15:59.574 ] 00:15:59.574 }' 00:15:59.574 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.833 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.833 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.833 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.833 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:59.833 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:59.833 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:59.833 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:59.833 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:59.833 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:59.833 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:59.833 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.833 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.833 [2024-11-20 11:25:42.748930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:59.833 
[2024-11-20 11:25:42.781305] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:00.092 [2024-11-20 11:25:42.983836] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:00.092 [2024-11-20 11:25:42.983970] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:00.092 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.092 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:00.092 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:00.092 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.092 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.092 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.092 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.092 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.092 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.092 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.092 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.092 11:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.092 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.092 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:00.092 "name": "raid_bdev1", 00:16:00.092 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:16:00.092 "strip_size_kb": 0, 00:16:00.092 "state": "online", 00:16:00.092 "raid_level": "raid1", 00:16:00.092 "superblock": true, 00:16:00.092 "num_base_bdevs": 4, 00:16:00.092 "num_base_bdevs_discovered": 3, 00:16:00.092 "num_base_bdevs_operational": 3, 00:16:00.092 "process": { 00:16:00.092 "type": "rebuild", 00:16:00.093 "target": "spare", 00:16:00.093 "progress": { 00:16:00.093 "blocks": 14336, 00:16:00.093 "percent": 22 00:16:00.093 } 00:16:00.093 }, 00:16:00.093 "base_bdevs_list": [ 00:16:00.093 { 00:16:00.093 "name": "spare", 00:16:00.093 "uuid": "a9d46fb6-d712-5eec-a5d9-af0d0693fe73", 00:16:00.093 "is_configured": true, 00:16:00.093 "data_offset": 2048, 00:16:00.093 "data_size": 63488 00:16:00.093 }, 00:16:00.093 { 00:16:00.093 "name": null, 00:16:00.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.093 "is_configured": false, 00:16:00.093 "data_offset": 0, 00:16:00.093 "data_size": 63488 00:16:00.093 }, 00:16:00.093 { 00:16:00.093 "name": "BaseBdev3", 00:16:00.093 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:16:00.093 "is_configured": true, 00:16:00.093 "data_offset": 2048, 00:16:00.093 "data_size": 63488 00:16:00.093 }, 00:16:00.093 { 00:16:00.093 "name": "BaseBdev4", 00:16:00.093 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:16:00.093 "is_configured": true, 00:16:00.093 "data_offset": 2048, 00:16:00.093 "data_size": 63488 00:16:00.093 } 00:16:00.093 ] 00:16:00.093 }' 00:16:00.093 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.093 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.093 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.093 [2024-11-20 11:25:43.105599] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
16384 offset_begin: 12288 offset_end: 18432 00:16:00.093 [2024-11-20 11:25:43.106288] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:00.093 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.093 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=509 00:16:00.093 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.093 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.093 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.093 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.093 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.093 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.093 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.093 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.093 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.093 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.093 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.093 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.093 "name": "raid_bdev1", 00:16:00.093 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:16:00.093 "strip_size_kb": 0, 00:16:00.093 "state": "online", 00:16:00.093 "raid_level": "raid1", 00:16:00.093 
"superblock": true, 00:16:00.093 "num_base_bdevs": 4, 00:16:00.093 "num_base_bdevs_discovered": 3, 00:16:00.093 "num_base_bdevs_operational": 3, 00:16:00.093 "process": { 00:16:00.093 "type": "rebuild", 00:16:00.093 "target": "spare", 00:16:00.093 "progress": { 00:16:00.093 "blocks": 16384, 00:16:00.093 "percent": 25 00:16:00.093 } 00:16:00.093 }, 00:16:00.093 "base_bdevs_list": [ 00:16:00.093 { 00:16:00.093 "name": "spare", 00:16:00.093 "uuid": "a9d46fb6-d712-5eec-a5d9-af0d0693fe73", 00:16:00.093 "is_configured": true, 00:16:00.093 "data_offset": 2048, 00:16:00.093 "data_size": 63488 00:16:00.093 }, 00:16:00.093 { 00:16:00.093 "name": null, 00:16:00.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.093 "is_configured": false, 00:16:00.093 "data_offset": 0, 00:16:00.093 "data_size": 63488 00:16:00.093 }, 00:16:00.093 { 00:16:00.093 "name": "BaseBdev3", 00:16:00.093 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:16:00.093 "is_configured": true, 00:16:00.093 "data_offset": 2048, 00:16:00.093 "data_size": 63488 00:16:00.093 }, 00:16:00.093 { 00:16:00.093 "name": "BaseBdev4", 00:16:00.093 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:16:00.093 "is_configured": true, 00:16:00.093 "data_offset": 2048, 00:16:00.093 "data_size": 63488 00:16:00.093 } 00:16:00.093 ] 00:16:00.093 }' 00:16:00.093 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.352 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.352 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.352 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.352 11:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:00.611 103.50 IOPS, 310.50 MiB/s [2024-11-20T11:25:43.727Z] [2024-11-20 11:25:43.475191] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:00.611 [2024-11-20 11:25:43.702407] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:01.178 [2024-11-20 11:25:44.064676] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:01.178 [2024-11-20 11:25:44.065288] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:01.178 [2024-11-20 11:25:44.275754] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:01.178 [2024-11-20 11:25:44.276132] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:01.178 11:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.178 11:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.178 11:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.178 11:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.178 11:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.178 11:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.178 11:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.178 11:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.178 11:25:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.178 11:25:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.437 11:25:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.437 11:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.437 "name": "raid_bdev1", 00:16:01.437 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:16:01.437 "strip_size_kb": 0, 00:16:01.437 "state": "online", 00:16:01.437 "raid_level": "raid1", 00:16:01.437 "superblock": true, 00:16:01.437 "num_base_bdevs": 4, 00:16:01.437 "num_base_bdevs_discovered": 3, 00:16:01.437 "num_base_bdevs_operational": 3, 00:16:01.437 "process": { 00:16:01.437 "type": "rebuild", 00:16:01.437 "target": "spare", 00:16:01.437 "progress": { 00:16:01.437 "blocks": 28672, 00:16:01.437 "percent": 45 00:16:01.437 } 00:16:01.437 }, 00:16:01.437 "base_bdevs_list": [ 00:16:01.437 { 00:16:01.437 "name": "spare", 00:16:01.437 "uuid": "a9d46fb6-d712-5eec-a5d9-af0d0693fe73", 00:16:01.437 "is_configured": true, 00:16:01.437 "data_offset": 2048, 00:16:01.437 "data_size": 63488 00:16:01.437 }, 00:16:01.437 { 00:16:01.437 "name": null, 00:16:01.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.437 "is_configured": false, 00:16:01.437 "data_offset": 0, 00:16:01.437 "data_size": 63488 00:16:01.437 }, 00:16:01.438 { 00:16:01.438 "name": "BaseBdev3", 00:16:01.438 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:16:01.438 "is_configured": true, 00:16:01.438 "data_offset": 2048, 00:16:01.438 "data_size": 63488 00:16:01.438 }, 00:16:01.438 { 00:16:01.438 "name": "BaseBdev4", 00:16:01.438 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:16:01.438 "is_configured": true, 00:16:01.438 "data_offset": 2048, 00:16:01.438 "data_size": 63488 00:16:01.438 } 00:16:01.438 ] 00:16:01.438 }' 00:16:01.438 11:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.438 92.40 IOPS, 277.20 MiB/s [2024-11-20T11:25:44.554Z] 
11:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.438 11:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.438 11:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.438 11:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:02.004 [2024-11-20 11:25:44.987476] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:02.304 [2024-11-20 11:25:45.189812] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:02.304 [2024-11-20 11:25:45.190298] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:02.562 84.17 IOPS, 252.50 MiB/s [2024-11-20T11:25:45.678Z] 11:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:02.562 11:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.562 11:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.562 11:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.562 11:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.562 11:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.562 11:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.562 11:25:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.562 11:25:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:16:02.562 11:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.562 11:25:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.562 11:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.562 "name": "raid_bdev1", 00:16:02.562 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:16:02.562 "strip_size_kb": 0, 00:16:02.562 "state": "online", 00:16:02.562 "raid_level": "raid1", 00:16:02.562 "superblock": true, 00:16:02.562 "num_base_bdevs": 4, 00:16:02.562 "num_base_bdevs_discovered": 3, 00:16:02.562 "num_base_bdevs_operational": 3, 00:16:02.562 "process": { 00:16:02.562 "type": "rebuild", 00:16:02.562 "target": "spare", 00:16:02.562 "progress": { 00:16:02.562 "blocks": 45056, 00:16:02.562 "percent": 70 00:16:02.562 } 00:16:02.562 }, 00:16:02.562 "base_bdevs_list": [ 00:16:02.562 { 00:16:02.562 "name": "spare", 00:16:02.562 "uuid": "a9d46fb6-d712-5eec-a5d9-af0d0693fe73", 00:16:02.562 "is_configured": true, 00:16:02.562 "data_offset": 2048, 00:16:02.562 "data_size": 63488 00:16:02.562 }, 00:16:02.562 { 00:16:02.562 "name": null, 00:16:02.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.562 "is_configured": false, 00:16:02.562 "data_offset": 0, 00:16:02.562 "data_size": 63488 00:16:02.562 }, 00:16:02.562 { 00:16:02.562 "name": "BaseBdev3", 00:16:02.562 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:16:02.562 "is_configured": true, 00:16:02.562 "data_offset": 2048, 00:16:02.562 "data_size": 63488 00:16:02.562 }, 00:16:02.562 { 00:16:02.562 "name": "BaseBdev4", 00:16:02.562 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:16:02.562 "is_configured": true, 00:16:02.562 "data_offset": 2048, 00:16:02.562 "data_size": 63488 00:16:02.562 } 00:16:02.562 ] 00:16:02.562 }' 00:16:02.562 11:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.562 
11:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.562 11:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.562 11:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.562 11:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:03.130 [2024-11-20 11:25:46.030285] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:16:03.390 77.29 IOPS, 231.86 MiB/s [2024-11-20T11:25:46.506Z] [2024-11-20 11:25:46.463740] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:03.649 [2024-11-20 11:25:46.553009] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:03.649 [2024-11-20 11:25:46.556492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.649 11:25:46 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.649 "name": "raid_bdev1", 00:16:03.649 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:16:03.649 "strip_size_kb": 0, 00:16:03.649 "state": "online", 00:16:03.649 "raid_level": "raid1", 00:16:03.649 "superblock": true, 00:16:03.649 "num_base_bdevs": 4, 00:16:03.649 "num_base_bdevs_discovered": 3, 00:16:03.649 "num_base_bdevs_operational": 3, 00:16:03.649 "base_bdevs_list": [ 00:16:03.649 { 00:16:03.649 "name": "spare", 00:16:03.649 "uuid": "a9d46fb6-d712-5eec-a5d9-af0d0693fe73", 00:16:03.649 "is_configured": true, 00:16:03.649 "data_offset": 2048, 00:16:03.649 "data_size": 63488 00:16:03.649 }, 00:16:03.649 { 00:16:03.649 "name": null, 00:16:03.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.649 "is_configured": false, 00:16:03.649 "data_offset": 0, 00:16:03.649 "data_size": 63488 00:16:03.649 }, 00:16:03.649 { 00:16:03.649 "name": "BaseBdev3", 00:16:03.649 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:16:03.649 "is_configured": true, 00:16:03.649 "data_offset": 2048, 00:16:03.649 "data_size": 63488 00:16:03.649 }, 00:16:03.649 { 00:16:03.649 "name": "BaseBdev4", 00:16:03.649 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:16:03.649 "is_configured": true, 00:16:03.649 "data_offset": 2048, 00:16:03.649 "data_size": 63488 00:16:03.649 } 00:16:03.649 ] 00:16:03.649 }' 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:03.649 11:25:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.649 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.908 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.908 "name": "raid_bdev1", 00:16:03.908 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:16:03.908 "strip_size_kb": 0, 00:16:03.908 "state": "online", 00:16:03.908 "raid_level": "raid1", 00:16:03.908 "superblock": true, 00:16:03.908 "num_base_bdevs": 4, 00:16:03.908 "num_base_bdevs_discovered": 3, 00:16:03.908 "num_base_bdevs_operational": 3, 00:16:03.908 "base_bdevs_list": [ 00:16:03.908 { 00:16:03.908 "name": "spare", 00:16:03.908 "uuid": 
"a9d46fb6-d712-5eec-a5d9-af0d0693fe73", 00:16:03.908 "is_configured": true, 00:16:03.908 "data_offset": 2048, 00:16:03.908 "data_size": 63488 00:16:03.908 }, 00:16:03.909 { 00:16:03.909 "name": null, 00:16:03.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.909 "is_configured": false, 00:16:03.909 "data_offset": 0, 00:16:03.909 "data_size": 63488 00:16:03.909 }, 00:16:03.909 { 00:16:03.909 "name": "BaseBdev3", 00:16:03.909 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:16:03.909 "is_configured": true, 00:16:03.909 "data_offset": 2048, 00:16:03.909 "data_size": 63488 00:16:03.909 }, 00:16:03.909 { 00:16:03.909 "name": "BaseBdev4", 00:16:03.909 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:16:03.909 "is_configured": true, 00:16:03.909 "data_offset": 2048, 00:16:03.909 "data_size": 63488 00:16:03.909 } 00:16:03.909 ] 00:16:03.909 }' 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.909 "name": "raid_bdev1", 00:16:03.909 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:16:03.909 "strip_size_kb": 0, 00:16:03.909 "state": "online", 00:16:03.909 "raid_level": "raid1", 00:16:03.909 "superblock": true, 00:16:03.909 "num_base_bdevs": 4, 00:16:03.909 "num_base_bdevs_discovered": 3, 00:16:03.909 "num_base_bdevs_operational": 3, 00:16:03.909 "base_bdevs_list": [ 00:16:03.909 { 00:16:03.909 "name": "spare", 00:16:03.909 "uuid": "a9d46fb6-d712-5eec-a5d9-af0d0693fe73", 00:16:03.909 "is_configured": true, 00:16:03.909 "data_offset": 2048, 00:16:03.909 "data_size": 63488 00:16:03.909 }, 00:16:03.909 { 00:16:03.909 "name": null, 00:16:03.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.909 "is_configured": false, 00:16:03.909 "data_offset": 0, 00:16:03.909 "data_size": 63488 00:16:03.909 }, 00:16:03.909 { 00:16:03.909 "name": 
"BaseBdev3", 00:16:03.909 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:16:03.909 "is_configured": true, 00:16:03.909 "data_offset": 2048, 00:16:03.909 "data_size": 63488 00:16:03.909 }, 00:16:03.909 { 00:16:03.909 "name": "BaseBdev4", 00:16:03.909 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:16:03.909 "is_configured": true, 00:16:03.909 "data_offset": 2048, 00:16:03.909 "data_size": 63488 00:16:03.909 } 00:16:03.909 ] 00:16:03.909 }' 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.909 11:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.476 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:04.476 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.476 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.476 72.50 IOPS, 217.50 MiB/s [2024-11-20T11:25:47.592Z] [2024-11-20 11:25:47.344034] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:04.476 [2024-11-20 11:25:47.344066] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:04.476 00:16:04.476 Latency(us) 00:16:04.476 [2024-11-20T11:25:47.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.476 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:04.476 raid_bdev1 : 8.13 71.63 214.88 0.00 0.00 17717.91 373.83 116762.83 00:16:04.476 [2024-11-20T11:25:47.592Z] =================================================================================================================== 00:16:04.476 [2024-11-20T11:25:47.592Z] Total : 71.63 214.88 0.00 0.00 17717.91 373.83 116762.83 00:16:04.476 { 00:16:04.476 "results": [ 00:16:04.476 { 00:16:04.476 "job": "raid_bdev1", 00:16:04.476 "core_mask": "0x1", 
00:16:04.476 "workload": "randrw", 00:16:04.476 "percentage": 50, 00:16:04.476 "status": "finished", 00:16:04.476 "queue_depth": 2, 00:16:04.476 "io_size": 3145728, 00:16:04.476 "runtime": 8.125343, 00:16:04.476 "iops": 71.62774543794643, 00:16:04.476 "mibps": 214.8832363138393, 00:16:04.476 "io_failed": 0, 00:16:04.477 "io_timeout": 0, 00:16:04.477 "avg_latency_us": 17717.90755563559, 00:16:04.477 "min_latency_us": 373.82707423580786, 00:16:04.477 "max_latency_us": 116762.82969432314 00:16:04.477 } 00:16:04.477 ], 00:16:04.477 "core_count": 1 00:16:04.477 } 00:16:04.477 [2024-11-20 11:25:47.462649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.477 [2024-11-20 11:25:47.462721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.477 [2024-11-20 11:25:47.462845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:04.477 [2024-11-20 11:25:47.462860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:04.477 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.477 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.477 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.477 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.477 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:04.477 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.477 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:04.477 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:04.477 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:04.477 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:04.477 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:04.477 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:04.477 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:04.477 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:04.477 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:04.477 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:04.477 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:04.477 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:04.477 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:04.736 /dev/nbd0 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:04.736 
11:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:04.736 1+0 records in 00:16:04.736 1+0 records out 00:16:04.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030822 s, 13.3 MB/s 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:04.736 11:25:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:04.736 11:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:04.995 /dev/nbd1 00:16:04.995 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:04.995 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:04.995 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:04.995 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:04.995 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:04.995 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:05.253 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:05.253 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@877 -- # break 00:16:05.253 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:05.253 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:05.253 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.253 1+0 records in 00:16:05.253 1+0 records out 00:16:05.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548309 s, 7.5 MB/s 00:16:05.253 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.253 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:05.253 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.253 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:05.253 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:05.253 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:05.253 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:05.253 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:05.253 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:05.253 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.253 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:05.253 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:05.253 11:25:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:05.253 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.253 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:05.511 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:05.511 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:05.511 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:05.511 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.511 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.511 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:05.511 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:05.511 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.511 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:05.511 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:05.511 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:05.511 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.511 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:05.511 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:05.511 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 
00:16:05.511 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:05.511 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:05.511 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:05.511 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:05.511 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:05.769 /dev/nbd1 00:16:06.028 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:06.028 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:06.028 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:06.028 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:06.028 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:06.028 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:06.028 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:06.028 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:06.028 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:06.028 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:06.028 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:06.028 1+0 records in 00:16:06.028 1+0 records out 00:16:06.028 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587436 s, 7.0 MB/s 
00:16:06.028 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.028 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:06.028 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.028 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:06.028 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:06.028 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.028 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:06.028 11:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:06.028 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:06.028 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.028 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:06.028 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:06.028 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:06.028 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.028 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:06.285 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:06.285 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 
00:16:06.285 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:06.285 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.285 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.285 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:06.285 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:06.285 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.285 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:06.285 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.285 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:06.285 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:06.285 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:06.285 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.285 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:06.542 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:06.542 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:06.542 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:06.542 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.542 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.542 11:25:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:06.542 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:06.542 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.542 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:06.542 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:06.542 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.542 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.542 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.542 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:06.542 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.542 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.542 [2024-11-20 11:25:49.557628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:06.542 [2024-11-20 11:25:49.557766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.542 [2024-11-20 11:25:49.557840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:06.542 [2024-11-20 11:25:49.557892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.542 [2024-11-20 11:25:49.560510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.542 [2024-11-20 11:25:49.560606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:06.542 [2024-11-20 11:25:49.560760] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:16:06.542 [2024-11-20 11:25:49.560855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:06.542 [2024-11-20 11:25:49.561050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:06.542 [2024-11-20 11:25:49.561228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:06.542 spare 00:16:06.542 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.542 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:06.542 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.542 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.800 [2024-11-20 11:25:49.661186] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:06.800 [2024-11-20 11:25:49.661258] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:06.800 [2024-11-20 11:25:49.661681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:16:06.800 [2024-11-20 11:25:49.661912] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:06.800 [2024-11-20 11:25:49.661925] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:06.800 [2024-11-20 11:25:49.662194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.800 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.800 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:06.800 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:16:06.800 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.800 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.800 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.800 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.800 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.800 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.800 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.800 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.800 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.800 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.800 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.800 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.800 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.800 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.800 "name": "raid_bdev1", 00:16:06.800 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:16:06.800 "strip_size_kb": 0, 00:16:06.800 "state": "online", 00:16:06.800 "raid_level": "raid1", 00:16:06.800 "superblock": true, 00:16:06.800 "num_base_bdevs": 4, 00:16:06.800 "num_base_bdevs_discovered": 3, 00:16:06.800 "num_base_bdevs_operational": 3, 00:16:06.800 "base_bdevs_list": [ 00:16:06.800 { 00:16:06.800 "name": "spare", 00:16:06.800 "uuid": 
"a9d46fb6-d712-5eec-a5d9-af0d0693fe73", 00:16:06.800 "is_configured": true, 00:16:06.800 "data_offset": 2048, 00:16:06.800 "data_size": 63488 00:16:06.800 }, 00:16:06.800 { 00:16:06.800 "name": null, 00:16:06.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.800 "is_configured": false, 00:16:06.800 "data_offset": 2048, 00:16:06.800 "data_size": 63488 00:16:06.800 }, 00:16:06.800 { 00:16:06.800 "name": "BaseBdev3", 00:16:06.800 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:16:06.800 "is_configured": true, 00:16:06.800 "data_offset": 2048, 00:16:06.800 "data_size": 63488 00:16:06.800 }, 00:16:06.800 { 00:16:06.800 "name": "BaseBdev4", 00:16:06.800 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:16:06.800 "is_configured": true, 00:16:06.800 "data_offset": 2048, 00:16:06.800 "data_size": 63488 00:16:06.800 } 00:16:06.800 ] 00:16:06.800 }' 00:16:06.800 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.800 11:25:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.059 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:07.059 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.059 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:07.059 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:07.059 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.059 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.059 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.059 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:07.059 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.059 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.317 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.317 "name": "raid_bdev1", 00:16:07.317 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:16:07.317 "strip_size_kb": 0, 00:16:07.317 "state": "online", 00:16:07.318 "raid_level": "raid1", 00:16:07.318 "superblock": true, 00:16:07.318 "num_base_bdevs": 4, 00:16:07.318 "num_base_bdevs_discovered": 3, 00:16:07.318 "num_base_bdevs_operational": 3, 00:16:07.318 "base_bdevs_list": [ 00:16:07.318 { 00:16:07.318 "name": "spare", 00:16:07.318 "uuid": "a9d46fb6-d712-5eec-a5d9-af0d0693fe73", 00:16:07.318 "is_configured": true, 00:16:07.318 "data_offset": 2048, 00:16:07.318 "data_size": 63488 00:16:07.318 }, 00:16:07.318 { 00:16:07.318 "name": null, 00:16:07.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.318 "is_configured": false, 00:16:07.318 "data_offset": 2048, 00:16:07.318 "data_size": 63488 00:16:07.318 }, 00:16:07.318 { 00:16:07.318 "name": "BaseBdev3", 00:16:07.318 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:16:07.318 "is_configured": true, 00:16:07.318 "data_offset": 2048, 00:16:07.318 "data_size": 63488 00:16:07.318 }, 00:16:07.318 { 00:16:07.318 "name": "BaseBdev4", 00:16:07.318 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:16:07.318 "is_configured": true, 00:16:07.318 "data_offset": 2048, 00:16:07.318 "data_size": 63488 00:16:07.318 } 00:16:07.318 ] 00:16:07.318 }' 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.318 
11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.318 [2024-11-20 11:25:50.329131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.318 "name": "raid_bdev1", 00:16:07.318 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:16:07.318 "strip_size_kb": 0, 00:16:07.318 "state": "online", 00:16:07.318 "raid_level": "raid1", 00:16:07.318 "superblock": true, 00:16:07.318 "num_base_bdevs": 4, 00:16:07.318 "num_base_bdevs_discovered": 2, 00:16:07.318 "num_base_bdevs_operational": 2, 00:16:07.318 "base_bdevs_list": [ 00:16:07.318 { 00:16:07.318 "name": null, 00:16:07.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.318 "is_configured": false, 00:16:07.318 "data_offset": 0, 00:16:07.318 "data_size": 63488 00:16:07.318 }, 00:16:07.318 { 00:16:07.318 "name": null, 00:16:07.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.318 "is_configured": false, 00:16:07.318 "data_offset": 2048, 00:16:07.318 "data_size": 63488 00:16:07.318 }, 00:16:07.318 { 00:16:07.318 "name": 
"BaseBdev3", 00:16:07.318 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:16:07.318 "is_configured": true, 00:16:07.318 "data_offset": 2048, 00:16:07.318 "data_size": 63488 00:16:07.318 }, 00:16:07.318 { 00:16:07.318 "name": "BaseBdev4", 00:16:07.318 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:16:07.318 "is_configured": true, 00:16:07.318 "data_offset": 2048, 00:16:07.318 "data_size": 63488 00:16:07.318 } 00:16:07.318 ] 00:16:07.318 }' 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.318 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.886 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:07.886 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.886 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.886 [2024-11-20 11:25:50.812434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.886 [2024-11-20 11:25:50.812666] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:07.886 [2024-11-20 11:25:50.812687] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:07.886 [2024-11-20 11:25:50.812732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.886 [2024-11-20 11:25:50.830439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:16:07.886 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.886 11:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:07.886 [2024-11-20 11:25:50.832744] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:08.843 11:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.843 11:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.843 11:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.843 11:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.843 11:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.843 11:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.843 11:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.843 11:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.843 11:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.843 11:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.843 11:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.843 "name": "raid_bdev1", 00:16:08.843 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:16:08.843 "strip_size_kb": 0, 00:16:08.843 "state": "online", 
00:16:08.843 "raid_level": "raid1", 00:16:08.843 "superblock": true, 00:16:08.843 "num_base_bdevs": 4, 00:16:08.843 "num_base_bdevs_discovered": 3, 00:16:08.843 "num_base_bdevs_operational": 3, 00:16:08.843 "process": { 00:16:08.843 "type": "rebuild", 00:16:08.843 "target": "spare", 00:16:08.843 "progress": { 00:16:08.843 "blocks": 20480, 00:16:08.843 "percent": 32 00:16:08.843 } 00:16:08.843 }, 00:16:08.843 "base_bdevs_list": [ 00:16:08.843 { 00:16:08.843 "name": "spare", 00:16:08.843 "uuid": "a9d46fb6-d712-5eec-a5d9-af0d0693fe73", 00:16:08.843 "is_configured": true, 00:16:08.843 "data_offset": 2048, 00:16:08.843 "data_size": 63488 00:16:08.843 }, 00:16:08.843 { 00:16:08.843 "name": null, 00:16:08.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.843 "is_configured": false, 00:16:08.843 "data_offset": 2048, 00:16:08.843 "data_size": 63488 00:16:08.843 }, 00:16:08.843 { 00:16:08.843 "name": "BaseBdev3", 00:16:08.843 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:16:08.843 "is_configured": true, 00:16:08.843 "data_offset": 2048, 00:16:08.843 "data_size": 63488 00:16:08.843 }, 00:16:08.843 { 00:16:08.843 "name": "BaseBdev4", 00:16:08.843 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:16:08.843 "is_configured": true, 00:16:08.843 "data_offset": 2048, 00:16:08.843 "data_size": 63488 00:16:08.843 } 00:16:08.843 ] 00:16:08.843 }' 00:16:08.843 11:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.843 11:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.843 11:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.115 11:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.115 11:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:09.115 11:25:51 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.115 11:25:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.115 [2024-11-20 11:25:51.991940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.115 [2024-11-20 11:25:52.038923] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:09.115 [2024-11-20 11:25:52.039037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.115 [2024-11-20 11:25:52.039057] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.115 [2024-11-20 11:25:52.039067] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:09.115 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.115 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:09.115 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.115 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.115 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.115 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.115 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:09.115 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.115 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.115 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.115 11:25:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.115 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.115 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.115 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.115 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.115 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.115 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.115 "name": "raid_bdev1", 00:16:09.115 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:16:09.115 "strip_size_kb": 0, 00:16:09.115 "state": "online", 00:16:09.115 "raid_level": "raid1", 00:16:09.115 "superblock": true, 00:16:09.115 "num_base_bdevs": 4, 00:16:09.115 "num_base_bdevs_discovered": 2, 00:16:09.115 "num_base_bdevs_operational": 2, 00:16:09.115 "base_bdevs_list": [ 00:16:09.115 { 00:16:09.115 "name": null, 00:16:09.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.115 "is_configured": false, 00:16:09.115 "data_offset": 0, 00:16:09.115 "data_size": 63488 00:16:09.115 }, 00:16:09.115 { 00:16:09.115 "name": null, 00:16:09.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.115 "is_configured": false, 00:16:09.115 "data_offset": 2048, 00:16:09.115 "data_size": 63488 00:16:09.115 }, 00:16:09.115 { 00:16:09.115 "name": "BaseBdev3", 00:16:09.115 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:16:09.115 "is_configured": true, 00:16:09.115 "data_offset": 2048, 00:16:09.115 "data_size": 63488 00:16:09.115 }, 00:16:09.115 { 00:16:09.115 "name": "BaseBdev4", 00:16:09.115 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:16:09.115 "is_configured": true, 00:16:09.115 "data_offset": 2048, 00:16:09.115 
"data_size": 63488 00:16:09.115 } 00:16:09.115 ] 00:16:09.115 }' 00:16:09.115 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.115 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.682 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:09.682 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.682 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.682 [2024-11-20 11:25:52.497629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:09.682 [2024-11-20 11:25:52.497787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.682 [2024-11-20 11:25:52.497840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:09.682 [2024-11-20 11:25:52.497900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.682 [2024-11-20 11:25:52.498507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.682 [2024-11-20 11:25:52.498581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:09.682 [2024-11-20 11:25:52.498719] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:09.682 [2024-11-20 11:25:52.498769] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:09.682 [2024-11-20 11:25:52.498818] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:09.682 [2024-11-20 11:25:52.498904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:09.682 [2024-11-20 11:25:52.515633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:16:09.682 spare 00:16:09.682 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.682 11:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:09.682 [2024-11-20 11:25:52.517914] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.616 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.616 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.616 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.616 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.616 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.616 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.616 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.616 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.616 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.616 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.616 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.616 "name": "raid_bdev1", 00:16:10.616 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:16:10.616 "strip_size_kb": 0, 00:16:10.616 
"state": "online", 00:16:10.616 "raid_level": "raid1", 00:16:10.616 "superblock": true, 00:16:10.616 "num_base_bdevs": 4, 00:16:10.616 "num_base_bdevs_discovered": 3, 00:16:10.616 "num_base_bdevs_operational": 3, 00:16:10.616 "process": { 00:16:10.616 "type": "rebuild", 00:16:10.616 "target": "spare", 00:16:10.616 "progress": { 00:16:10.616 "blocks": 20480, 00:16:10.616 "percent": 32 00:16:10.616 } 00:16:10.616 }, 00:16:10.616 "base_bdevs_list": [ 00:16:10.616 { 00:16:10.616 "name": "spare", 00:16:10.616 "uuid": "a9d46fb6-d712-5eec-a5d9-af0d0693fe73", 00:16:10.616 "is_configured": true, 00:16:10.616 "data_offset": 2048, 00:16:10.616 "data_size": 63488 00:16:10.616 }, 00:16:10.616 { 00:16:10.616 "name": null, 00:16:10.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.616 "is_configured": false, 00:16:10.616 "data_offset": 2048, 00:16:10.616 "data_size": 63488 00:16:10.616 }, 00:16:10.616 { 00:16:10.616 "name": "BaseBdev3", 00:16:10.616 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:16:10.616 "is_configured": true, 00:16:10.616 "data_offset": 2048, 00:16:10.616 "data_size": 63488 00:16:10.616 }, 00:16:10.616 { 00:16:10.616 "name": "BaseBdev4", 00:16:10.616 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:16:10.616 "is_configured": true, 00:16:10.616 "data_offset": 2048, 00:16:10.616 "data_size": 63488 00:16:10.616 } 00:16:10.616 ] 00:16:10.616 }' 00:16:10.616 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.616 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.616 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.616 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.616 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:10.616 11:25:53 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.616 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.616 [2024-11-20 11:25:53.657208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.616 [2024-11-20 11:25:53.724130] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:10.616 [2024-11-20 11:25:53.724377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.616 [2024-11-20 11:25:53.724467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.616 [2024-11-20 11:25:53.724483] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:10.875 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.875 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:10.875 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.875 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.875 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.875 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.875 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:10.875 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.875 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.875 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.875 11:25:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.875 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.875 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.875 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.875 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.875 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.875 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.875 "name": "raid_bdev1", 00:16:10.875 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:16:10.875 "strip_size_kb": 0, 00:16:10.875 "state": "online", 00:16:10.875 "raid_level": "raid1", 00:16:10.875 "superblock": true, 00:16:10.875 "num_base_bdevs": 4, 00:16:10.875 "num_base_bdevs_discovered": 2, 00:16:10.875 "num_base_bdevs_operational": 2, 00:16:10.875 "base_bdevs_list": [ 00:16:10.875 { 00:16:10.875 "name": null, 00:16:10.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.875 "is_configured": false, 00:16:10.875 "data_offset": 0, 00:16:10.875 "data_size": 63488 00:16:10.875 }, 00:16:10.875 { 00:16:10.875 "name": null, 00:16:10.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.875 "is_configured": false, 00:16:10.875 "data_offset": 2048, 00:16:10.875 "data_size": 63488 00:16:10.875 }, 00:16:10.875 { 00:16:10.875 "name": "BaseBdev3", 00:16:10.875 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:16:10.875 "is_configured": true, 00:16:10.875 "data_offset": 2048, 00:16:10.875 "data_size": 63488 00:16:10.875 }, 00:16:10.875 { 00:16:10.875 "name": "BaseBdev4", 00:16:10.875 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:16:10.875 "is_configured": true, 00:16:10.875 "data_offset": 2048, 00:16:10.875 
"data_size": 63488 00:16:10.875 } 00:16:10.875 ] 00:16:10.875 }' 00:16:10.875 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.875 11:25:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.134 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.134 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.134 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.134 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.134 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.134 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.134 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.134 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.134 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.134 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.134 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.134 "name": "raid_bdev1", 00:16:11.134 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:16:11.134 "strip_size_kb": 0, 00:16:11.134 "state": "online", 00:16:11.134 "raid_level": "raid1", 00:16:11.134 "superblock": true, 00:16:11.134 "num_base_bdevs": 4, 00:16:11.134 "num_base_bdevs_discovered": 2, 00:16:11.134 "num_base_bdevs_operational": 2, 00:16:11.134 "base_bdevs_list": [ 00:16:11.134 { 00:16:11.134 "name": null, 00:16:11.134 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:11.134 "is_configured": false, 00:16:11.134 "data_offset": 0, 00:16:11.134 "data_size": 63488 00:16:11.134 }, 00:16:11.134 { 00:16:11.134 "name": null, 00:16:11.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.134 "is_configured": false, 00:16:11.134 "data_offset": 2048, 00:16:11.134 "data_size": 63488 00:16:11.134 }, 00:16:11.134 { 00:16:11.134 "name": "BaseBdev3", 00:16:11.134 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:16:11.134 "is_configured": true, 00:16:11.134 "data_offset": 2048, 00:16:11.134 "data_size": 63488 00:16:11.134 }, 00:16:11.134 { 00:16:11.134 "name": "BaseBdev4", 00:16:11.134 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:16:11.134 "is_configured": true, 00:16:11.134 "data_offset": 2048, 00:16:11.134 "data_size": 63488 00:16:11.134 } 00:16:11.134 ] 00:16:11.134 }' 00:16:11.134 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.392 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:11.392 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.392 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:11.392 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:11.392 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.392 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.392 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.392 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:11.392 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.392 11:25:54 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.392 [2024-11-20 11:25:54.346554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:11.392 [2024-11-20 11:25:54.346618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.392 [2024-11-20 11:25:54.346644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:16:11.392 [2024-11-20 11:25:54.346655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.392 [2024-11-20 11:25:54.347164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.392 [2024-11-20 11:25:54.347183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:11.392 [2024-11-20 11:25:54.347276] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:11.392 [2024-11-20 11:25:54.347292] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:11.392 [2024-11-20 11:25:54.347309] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:11.392 [2024-11-20 11:25:54.347320] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:11.392 BaseBdev1 00:16:11.392 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.392 11:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:12.353 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:12.353 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.353 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:12.353 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.353 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.353 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:12.354 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.354 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.354 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.354 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.354 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.354 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.354 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.354 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.354 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.354 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.354 "name": "raid_bdev1", 00:16:12.354 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:16:12.354 "strip_size_kb": 0, 00:16:12.354 "state": "online", 00:16:12.354 "raid_level": "raid1", 00:16:12.354 "superblock": true, 00:16:12.354 "num_base_bdevs": 4, 00:16:12.354 "num_base_bdevs_discovered": 2, 00:16:12.354 "num_base_bdevs_operational": 2, 00:16:12.354 "base_bdevs_list": [ 00:16:12.354 { 00:16:12.354 "name": null, 00:16:12.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.354 "is_configured": false, 00:16:12.354 
"data_offset": 0, 00:16:12.354 "data_size": 63488 00:16:12.354 }, 00:16:12.354 { 00:16:12.354 "name": null, 00:16:12.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.354 "is_configured": false, 00:16:12.354 "data_offset": 2048, 00:16:12.354 "data_size": 63488 00:16:12.354 }, 00:16:12.354 { 00:16:12.354 "name": "BaseBdev3", 00:16:12.354 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:16:12.354 "is_configured": true, 00:16:12.354 "data_offset": 2048, 00:16:12.354 "data_size": 63488 00:16:12.354 }, 00:16:12.354 { 00:16:12.354 "name": "BaseBdev4", 00:16:12.354 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:16:12.354 "is_configured": true, 00:16:12.354 "data_offset": 2048, 00:16:12.354 "data_size": 63488 00:16:12.354 } 00:16:12.354 ] 00:16:12.354 }' 00:16:12.354 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.354 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.931 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.931 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.931 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.931 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.931 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.931 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.931 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.931 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.931 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:16:12.931 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.931 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.931 "name": "raid_bdev1", 00:16:12.931 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:16:12.931 "strip_size_kb": 0, 00:16:12.931 "state": "online", 00:16:12.931 "raid_level": "raid1", 00:16:12.931 "superblock": true, 00:16:12.931 "num_base_bdevs": 4, 00:16:12.931 "num_base_bdevs_discovered": 2, 00:16:12.931 "num_base_bdevs_operational": 2, 00:16:12.931 "base_bdevs_list": [ 00:16:12.931 { 00:16:12.931 "name": null, 00:16:12.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.931 "is_configured": false, 00:16:12.931 "data_offset": 0, 00:16:12.931 "data_size": 63488 00:16:12.931 }, 00:16:12.931 { 00:16:12.931 "name": null, 00:16:12.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.931 "is_configured": false, 00:16:12.931 "data_offset": 2048, 00:16:12.931 "data_size": 63488 00:16:12.931 }, 00:16:12.931 { 00:16:12.931 "name": "BaseBdev3", 00:16:12.931 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:16:12.931 "is_configured": true, 00:16:12.931 "data_offset": 2048, 00:16:12.931 "data_size": 63488 00:16:12.931 }, 00:16:12.931 { 00:16:12.931 "name": "BaseBdev4", 00:16:12.931 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:16:12.931 "is_configured": true, 00:16:12.931 "data_offset": 2048, 00:16:12.931 "data_size": 63488 00:16:12.931 } 00:16:12.931 ] 00:16:12.931 }' 00:16:12.931 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.932 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.932 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.932 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.932 
11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:12.932 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:16:12.932 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:12.932 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:12.932 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:12.932 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:12.932 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:12.932 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:12.932 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.932 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.932 [2024-11-20 11:25:55.988219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:12.932 [2024-11-20 11:25:55.988410] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:12.932 [2024-11-20 11:25:55.988428] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:12.932 request: 00:16:12.932 { 00:16:12.932 "base_bdev": "BaseBdev1", 00:16:12.932 "raid_bdev": "raid_bdev1", 00:16:12.932 "method": "bdev_raid_add_base_bdev", 00:16:12.932 "req_id": 1 00:16:12.932 } 00:16:12.932 Got JSON-RPC error response 00:16:12.932 response: 00:16:12.932 { 00:16:12.932 "code": -22, 00:16:12.932 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:12.932 } 00:16:12.932 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:12.932 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:16:12.932 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:12.932 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:12.932 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:12.932 11:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:14.307 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:14.307 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.307 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.307 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.307 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.307 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:14.307 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.307 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.307 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.307 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.307 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.307 11:25:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.307 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.307 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.307 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.307 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.307 "name": "raid_bdev1", 00:16:14.307 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:16:14.307 "strip_size_kb": 0, 00:16:14.307 "state": "online", 00:16:14.307 "raid_level": "raid1", 00:16:14.307 "superblock": true, 00:16:14.307 "num_base_bdevs": 4, 00:16:14.307 "num_base_bdevs_discovered": 2, 00:16:14.307 "num_base_bdevs_operational": 2, 00:16:14.307 "base_bdevs_list": [ 00:16:14.307 { 00:16:14.307 "name": null, 00:16:14.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.307 "is_configured": false, 00:16:14.307 "data_offset": 0, 00:16:14.307 "data_size": 63488 00:16:14.307 }, 00:16:14.307 { 00:16:14.307 "name": null, 00:16:14.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.307 "is_configured": false, 00:16:14.307 "data_offset": 2048, 00:16:14.307 "data_size": 63488 00:16:14.307 }, 00:16:14.307 { 00:16:14.307 "name": "BaseBdev3", 00:16:14.307 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:16:14.307 "is_configured": true, 00:16:14.307 "data_offset": 2048, 00:16:14.308 "data_size": 63488 00:16:14.308 }, 00:16:14.308 { 00:16:14.308 "name": "BaseBdev4", 00:16:14.308 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:16:14.308 "is_configured": true, 00:16:14.308 "data_offset": 2048, 00:16:14.308 "data_size": 63488 00:16:14.308 } 00:16:14.308 ] 00:16:14.308 }' 00:16:14.308 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.308 11:25:57 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.308 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.308 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.308 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.308 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.308 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.308 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.308 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.308 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.308 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.579 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.579 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.579 "name": "raid_bdev1", 00:16:14.579 "uuid": "26b4eb6e-b4e3-4feb-acf3-6236bd3d230f", 00:16:14.579 "strip_size_kb": 0, 00:16:14.579 "state": "online", 00:16:14.579 "raid_level": "raid1", 00:16:14.579 "superblock": true, 00:16:14.579 "num_base_bdevs": 4, 00:16:14.579 "num_base_bdevs_discovered": 2, 00:16:14.579 "num_base_bdevs_operational": 2, 00:16:14.579 "base_bdevs_list": [ 00:16:14.579 { 00:16:14.579 "name": null, 00:16:14.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.579 "is_configured": false, 00:16:14.579 "data_offset": 0, 00:16:14.579 "data_size": 63488 00:16:14.579 }, 00:16:14.579 { 00:16:14.579 "name": null, 00:16:14.579 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:14.579 "is_configured": false, 00:16:14.579 "data_offset": 2048, 00:16:14.579 "data_size": 63488 00:16:14.579 }, 00:16:14.579 { 00:16:14.579 "name": "BaseBdev3", 00:16:14.579 "uuid": "72a7fb2d-dcbf-5a48-b045-593b88133d87", 00:16:14.579 "is_configured": true, 00:16:14.579 "data_offset": 2048, 00:16:14.579 "data_size": 63488 00:16:14.579 }, 00:16:14.579 { 00:16:14.579 "name": "BaseBdev4", 00:16:14.579 "uuid": "dbfd61a9-c7d7-59ff-ad07-8ba4151d29a7", 00:16:14.579 "is_configured": true, 00:16:14.579 "data_offset": 2048, 00:16:14.579 "data_size": 63488 00:16:14.579 } 00:16:14.579 ] 00:16:14.579 }' 00:16:14.579 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.579 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.579 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.579 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.579 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79325 00:16:14.579 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79325 ']' 00:16:14.579 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79325 00:16:14.579 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:16:14.579 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.579 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79325 00:16:14.579 killing process with pid 79325 00:16:14.579 Received shutdown signal, test time was about 18.288342 seconds 00:16:14.579 00:16:14.579 Latency(us) 00:16:14.579 [2024-11-20T11:25:57.695Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:16:14.579 [2024-11-20T11:25:57.695Z] =================================================================================================================== 00:16:14.579 [2024-11-20T11:25:57.695Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:14.579 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:14.579 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:14.579 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79325' 00:16:14.579 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79325 00:16:14.579 11:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79325 00:16:14.579 [2024-11-20 11:25:57.578113] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.579 [2024-11-20 11:25:57.578267] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.579 [2024-11-20 11:25:57.578403] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.579 [2024-11-20 11:25:57.578422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:15.148 [2024-11-20 11:25:58.054648] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:16.525 11:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:16.525 00:16:16.525 real 0m22.157s 00:16:16.525 user 0m29.146s 00:16:16.525 sys 0m2.648s 00:16:16.525 11:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.525 11:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.525 ************************************ 00:16:16.525 END TEST raid_rebuild_test_sb_io 00:16:16.525 
************************************ 00:16:16.525 11:25:59 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:16.525 11:25:59 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:16:16.525 11:25:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:16.525 11:25:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.525 11:25:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:16.525 ************************************ 00:16:16.525 START TEST raid5f_state_function_test 00:16:16.525 ************************************ 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:16.525 11:25:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80058 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:16.525 11:25:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80058' 00:16:16.525 Process raid pid: 80058 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80058 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80058 ']' 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:16.525 11:25:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.525 [2024-11-20 11:25:59.502223] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:16:16.525 [2024-11-20 11:25:59.502450] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.783 [2024-11-20 11:25:59.677337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.784 [2024-11-20 11:25:59.806348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.042 [2024-11-20 11:26:00.049573] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.042 [2024-11-20 11:26:00.049617] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.300 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.300 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:17.300 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:17.300 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.300 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.300 [2024-11-20 11:26:00.410153] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:17.300 [2024-11-20 11:26:00.410307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:17.300 [2024-11-20 11:26:00.410342] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:17.300 [2024-11-20 11:26:00.410354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:17.300 [2024-11-20 11:26:00.410362] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:17.300 [2024-11-20 11:26:00.410373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:17.559 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.559 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:17.559 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.559 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.559 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.559 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.559 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:17.559 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.559 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.559 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.559 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.559 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.559 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.559 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.559 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.559 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:17.559 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.559 "name": "Existed_Raid", 00:16:17.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.559 "strip_size_kb": 64, 00:16:17.559 "state": "configuring", 00:16:17.559 "raid_level": "raid5f", 00:16:17.559 "superblock": false, 00:16:17.559 "num_base_bdevs": 3, 00:16:17.559 "num_base_bdevs_discovered": 0, 00:16:17.559 "num_base_bdevs_operational": 3, 00:16:17.559 "base_bdevs_list": [ 00:16:17.559 { 00:16:17.559 "name": "BaseBdev1", 00:16:17.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.559 "is_configured": false, 00:16:17.559 "data_offset": 0, 00:16:17.559 "data_size": 0 00:16:17.559 }, 00:16:17.559 { 00:16:17.559 "name": "BaseBdev2", 00:16:17.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.559 "is_configured": false, 00:16:17.559 "data_offset": 0, 00:16:17.559 "data_size": 0 00:16:17.559 }, 00:16:17.559 { 00:16:17.559 "name": "BaseBdev3", 00:16:17.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.559 "is_configured": false, 00:16:17.559 "data_offset": 0, 00:16:17.559 "data_size": 0 00:16:17.559 } 00:16:17.559 ] 00:16:17.559 }' 00:16:17.559 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.559 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.818 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:17.818 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.818 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.818 [2024-11-20 11:26:00.893273] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:17.818 [2024-11-20 11:26:00.893383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:16:17.818 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.818 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:17.818 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.818 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.818 [2024-11-20 11:26:00.905273] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:17.818 [2024-11-20 11:26:00.905394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:17.818 [2024-11-20 11:26:00.905437] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:17.818 [2024-11-20 11:26:00.905483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:17.818 [2024-11-20 11:26:00.905508] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:17.818 [2024-11-20 11:26:00.905587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:17.818 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.819 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:17.819 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.819 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.078 [2024-11-20 11:26:00.955945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.078 BaseBdev1 00:16:18.078 11:26:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.078 [ 00:16:18.078 { 00:16:18.078 "name": "BaseBdev1", 00:16:18.078 "aliases": [ 00:16:18.078 "548b3020-fa63-4ec8-a029-0d0d1448964f" 00:16:18.078 ], 00:16:18.078 "product_name": "Malloc disk", 00:16:18.078 "block_size": 512, 00:16:18.078 "num_blocks": 65536, 00:16:18.078 "uuid": "548b3020-fa63-4ec8-a029-0d0d1448964f", 00:16:18.078 "assigned_rate_limits": { 00:16:18.078 "rw_ios_per_sec": 0, 00:16:18.078 
"rw_mbytes_per_sec": 0, 00:16:18.078 "r_mbytes_per_sec": 0, 00:16:18.078 "w_mbytes_per_sec": 0 00:16:18.078 }, 00:16:18.078 "claimed": true, 00:16:18.078 "claim_type": "exclusive_write", 00:16:18.078 "zoned": false, 00:16:18.078 "supported_io_types": { 00:16:18.078 "read": true, 00:16:18.078 "write": true, 00:16:18.078 "unmap": true, 00:16:18.078 "flush": true, 00:16:18.078 "reset": true, 00:16:18.078 "nvme_admin": false, 00:16:18.078 "nvme_io": false, 00:16:18.078 "nvme_io_md": false, 00:16:18.078 "write_zeroes": true, 00:16:18.078 "zcopy": true, 00:16:18.078 "get_zone_info": false, 00:16:18.078 "zone_management": false, 00:16:18.078 "zone_append": false, 00:16:18.078 "compare": false, 00:16:18.078 "compare_and_write": false, 00:16:18.078 "abort": true, 00:16:18.078 "seek_hole": false, 00:16:18.078 "seek_data": false, 00:16:18.078 "copy": true, 00:16:18.078 "nvme_iov_md": false 00:16:18.078 }, 00:16:18.078 "memory_domains": [ 00:16:18.078 { 00:16:18.078 "dma_device_id": "system", 00:16:18.078 "dma_device_type": 1 00:16:18.078 }, 00:16:18.078 { 00:16:18.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.078 "dma_device_type": 2 00:16:18.078 } 00:16:18.078 ], 00:16:18.078 "driver_specific": {} 00:16:18.078 } 00:16:18.078 ] 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.078 11:26:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.078 11:26:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.078 11:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.078 11:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.078 "name": "Existed_Raid", 00:16:18.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.078 "strip_size_kb": 64, 00:16:18.078 "state": "configuring", 00:16:18.078 "raid_level": "raid5f", 00:16:18.078 "superblock": false, 00:16:18.078 "num_base_bdevs": 3, 00:16:18.078 "num_base_bdevs_discovered": 1, 00:16:18.078 "num_base_bdevs_operational": 3, 00:16:18.078 "base_bdevs_list": [ 00:16:18.078 { 00:16:18.078 "name": "BaseBdev1", 00:16:18.078 "uuid": "548b3020-fa63-4ec8-a029-0d0d1448964f", 00:16:18.078 "is_configured": true, 00:16:18.078 "data_offset": 0, 00:16:18.078 "data_size": 65536 00:16:18.078 }, 00:16:18.078 { 00:16:18.078 "name": 
"BaseBdev2", 00:16:18.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.078 "is_configured": false, 00:16:18.078 "data_offset": 0, 00:16:18.078 "data_size": 0 00:16:18.078 }, 00:16:18.078 { 00:16:18.078 "name": "BaseBdev3", 00:16:18.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.078 "is_configured": false, 00:16:18.078 "data_offset": 0, 00:16:18.078 "data_size": 0 00:16:18.078 } 00:16:18.078 ] 00:16:18.078 }' 00:16:18.078 11:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.078 11:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.645 [2024-11-20 11:26:01.471430] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:18.645 [2024-11-20 11:26:01.471518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.645 [2024-11-20 11:26:01.479503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.645 [2024-11-20 11:26:01.481707] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:16:18.645 [2024-11-20 11:26:01.481814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.645 [2024-11-20 11:26:01.481859] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:18.645 [2024-11-20 11:26:01.481891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.645 "name": "Existed_Raid", 00:16:18.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.645 "strip_size_kb": 64, 00:16:18.645 "state": "configuring", 00:16:18.645 "raid_level": "raid5f", 00:16:18.645 "superblock": false, 00:16:18.645 "num_base_bdevs": 3, 00:16:18.645 "num_base_bdevs_discovered": 1, 00:16:18.645 "num_base_bdevs_operational": 3, 00:16:18.645 "base_bdevs_list": [ 00:16:18.645 { 00:16:18.645 "name": "BaseBdev1", 00:16:18.645 "uuid": "548b3020-fa63-4ec8-a029-0d0d1448964f", 00:16:18.645 "is_configured": true, 00:16:18.645 "data_offset": 0, 00:16:18.645 "data_size": 65536 00:16:18.645 }, 00:16:18.645 { 00:16:18.645 "name": "BaseBdev2", 00:16:18.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.645 "is_configured": false, 00:16:18.645 "data_offset": 0, 00:16:18.645 "data_size": 0 00:16:18.645 }, 00:16:18.645 { 00:16:18.645 "name": "BaseBdev3", 00:16:18.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.645 "is_configured": false, 00:16:18.645 "data_offset": 0, 00:16:18.645 "data_size": 0 00:16:18.645 } 00:16:18.645 ] 00:16:18.645 }' 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.645 11:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.903 11:26:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:18.903 11:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.904 11:26:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.164 [2024-11-20 11:26:02.024338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.164 BaseBdev2 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:19.164 [ 00:16:19.164 { 00:16:19.164 "name": "BaseBdev2", 00:16:19.164 "aliases": [ 00:16:19.164 "eaea85ec-93b9-4e9d-94ff-64cbf9769bc8" 00:16:19.164 ], 00:16:19.164 "product_name": "Malloc disk", 00:16:19.164 "block_size": 512, 00:16:19.164 "num_blocks": 65536, 00:16:19.164 "uuid": "eaea85ec-93b9-4e9d-94ff-64cbf9769bc8", 00:16:19.164 "assigned_rate_limits": { 00:16:19.164 "rw_ios_per_sec": 0, 00:16:19.164 "rw_mbytes_per_sec": 0, 00:16:19.164 "r_mbytes_per_sec": 0, 00:16:19.164 "w_mbytes_per_sec": 0 00:16:19.164 }, 00:16:19.164 "claimed": true, 00:16:19.164 "claim_type": "exclusive_write", 00:16:19.164 "zoned": false, 00:16:19.164 "supported_io_types": { 00:16:19.164 "read": true, 00:16:19.164 "write": true, 00:16:19.164 "unmap": true, 00:16:19.164 "flush": true, 00:16:19.164 "reset": true, 00:16:19.164 "nvme_admin": false, 00:16:19.164 "nvme_io": false, 00:16:19.164 "nvme_io_md": false, 00:16:19.164 "write_zeroes": true, 00:16:19.164 "zcopy": true, 00:16:19.164 "get_zone_info": false, 00:16:19.164 "zone_management": false, 00:16:19.164 "zone_append": false, 00:16:19.164 "compare": false, 00:16:19.164 "compare_and_write": false, 00:16:19.164 "abort": true, 00:16:19.164 "seek_hole": false, 00:16:19.164 "seek_data": false, 00:16:19.164 "copy": true, 00:16:19.164 "nvme_iov_md": false 00:16:19.164 }, 00:16:19.164 "memory_domains": [ 00:16:19.164 { 00:16:19.164 "dma_device_id": "system", 00:16:19.164 "dma_device_type": 1 00:16:19.164 }, 00:16:19.164 { 00:16:19.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.164 "dma_device_type": 2 00:16:19.164 } 00:16:19.164 ], 00:16:19.164 "driver_specific": {} 00:16:19.164 } 00:16:19.164 ] 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:16:19.164 "name": "Existed_Raid", 00:16:19.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.164 "strip_size_kb": 64, 00:16:19.164 "state": "configuring", 00:16:19.164 "raid_level": "raid5f", 00:16:19.164 "superblock": false, 00:16:19.164 "num_base_bdevs": 3, 00:16:19.164 "num_base_bdevs_discovered": 2, 00:16:19.164 "num_base_bdevs_operational": 3, 00:16:19.164 "base_bdevs_list": [ 00:16:19.164 { 00:16:19.164 "name": "BaseBdev1", 00:16:19.164 "uuid": "548b3020-fa63-4ec8-a029-0d0d1448964f", 00:16:19.164 "is_configured": true, 00:16:19.164 "data_offset": 0, 00:16:19.164 "data_size": 65536 00:16:19.164 }, 00:16:19.164 { 00:16:19.164 "name": "BaseBdev2", 00:16:19.164 "uuid": "eaea85ec-93b9-4e9d-94ff-64cbf9769bc8", 00:16:19.164 "is_configured": true, 00:16:19.164 "data_offset": 0, 00:16:19.164 "data_size": 65536 00:16:19.164 }, 00:16:19.164 { 00:16:19.164 "name": "BaseBdev3", 00:16:19.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.164 "is_configured": false, 00:16:19.164 "data_offset": 0, 00:16:19.164 "data_size": 0 00:16:19.164 } 00:16:19.164 ] 00:16:19.164 }' 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.164 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.446 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:19.446 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.446 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.723 [2024-11-20 11:26:02.578908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:19.723 [2024-11-20 11:26:02.579079] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:19.723 [2024-11-20 11:26:02.579133] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:19.723 [2024-11-20 11:26:02.579496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:19.723 [2024-11-20 11:26:02.586128] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:19.723 [2024-11-20 11:26:02.586200] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:19.723 [2024-11-20 11:26:02.586614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.723 BaseBdev3 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.723 [ 00:16:19.723 { 00:16:19.723 "name": "BaseBdev3", 00:16:19.723 "aliases": [ 00:16:19.723 "627fc985-8966-4593-bb98-ce65909fdf06" 00:16:19.723 ], 00:16:19.723 "product_name": "Malloc disk", 00:16:19.723 "block_size": 512, 00:16:19.723 "num_blocks": 65536, 00:16:19.723 "uuid": "627fc985-8966-4593-bb98-ce65909fdf06", 00:16:19.723 "assigned_rate_limits": { 00:16:19.723 "rw_ios_per_sec": 0, 00:16:19.723 "rw_mbytes_per_sec": 0, 00:16:19.723 "r_mbytes_per_sec": 0, 00:16:19.723 "w_mbytes_per_sec": 0 00:16:19.723 }, 00:16:19.723 "claimed": true, 00:16:19.723 "claim_type": "exclusive_write", 00:16:19.723 "zoned": false, 00:16:19.723 "supported_io_types": { 00:16:19.723 "read": true, 00:16:19.723 "write": true, 00:16:19.723 "unmap": true, 00:16:19.723 "flush": true, 00:16:19.723 "reset": true, 00:16:19.723 "nvme_admin": false, 00:16:19.723 "nvme_io": false, 00:16:19.723 "nvme_io_md": false, 00:16:19.723 "write_zeroes": true, 00:16:19.723 "zcopy": true, 00:16:19.723 "get_zone_info": false, 00:16:19.723 "zone_management": false, 00:16:19.723 "zone_append": false, 00:16:19.723 "compare": false, 00:16:19.723 "compare_and_write": false, 00:16:19.723 "abort": true, 00:16:19.723 "seek_hole": false, 00:16:19.723 "seek_data": false, 00:16:19.723 "copy": true, 00:16:19.723 "nvme_iov_md": false 00:16:19.723 }, 00:16:19.723 "memory_domains": [ 00:16:19.723 { 00:16:19.723 "dma_device_id": "system", 00:16:19.723 "dma_device_type": 1 00:16:19.723 }, 00:16:19.723 { 00:16:19.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.723 "dma_device_type": 2 00:16:19.723 } 00:16:19.723 ], 00:16:19.723 "driver_specific": {} 00:16:19.723 } 00:16:19.723 ] 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.723 11:26:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.723 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.723 "name": "Existed_Raid", 00:16:19.723 "uuid": "1ff98147-fa10-4942-a40b-331067d95f51", 00:16:19.723 "strip_size_kb": 64, 00:16:19.723 "state": "online", 00:16:19.723 "raid_level": "raid5f", 00:16:19.723 "superblock": false, 00:16:19.723 "num_base_bdevs": 3, 00:16:19.723 "num_base_bdevs_discovered": 3, 00:16:19.723 "num_base_bdevs_operational": 3, 00:16:19.723 "base_bdevs_list": [ 00:16:19.723 { 00:16:19.723 "name": "BaseBdev1", 00:16:19.724 "uuid": "548b3020-fa63-4ec8-a029-0d0d1448964f", 00:16:19.724 "is_configured": true, 00:16:19.724 "data_offset": 0, 00:16:19.724 "data_size": 65536 00:16:19.724 }, 00:16:19.724 { 00:16:19.724 "name": "BaseBdev2", 00:16:19.724 "uuid": "eaea85ec-93b9-4e9d-94ff-64cbf9769bc8", 00:16:19.724 "is_configured": true, 00:16:19.724 "data_offset": 0, 00:16:19.724 "data_size": 65536 00:16:19.724 }, 00:16:19.724 { 00:16:19.724 "name": "BaseBdev3", 00:16:19.724 "uuid": "627fc985-8966-4593-bb98-ce65909fdf06", 00:16:19.724 "is_configured": true, 00:16:19.724 "data_offset": 0, 00:16:19.724 "data_size": 65536 00:16:19.724 } 00:16:19.724 ] 00:16:19.724 }' 00:16:19.724 11:26:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.724 11:26:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.290 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:20.290 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:20.290 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:20.290 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:20.290 11:26:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:20.290 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:20.290 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:20.290 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.290 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.290 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:20.290 [2024-11-20 11:26:03.125489] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:20.290 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.290 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:20.290 "name": "Existed_Raid", 00:16:20.290 "aliases": [ 00:16:20.290 "1ff98147-fa10-4942-a40b-331067d95f51" 00:16:20.290 ], 00:16:20.290 "product_name": "Raid Volume", 00:16:20.290 "block_size": 512, 00:16:20.290 "num_blocks": 131072, 00:16:20.290 "uuid": "1ff98147-fa10-4942-a40b-331067d95f51", 00:16:20.290 "assigned_rate_limits": { 00:16:20.290 "rw_ios_per_sec": 0, 00:16:20.290 "rw_mbytes_per_sec": 0, 00:16:20.290 "r_mbytes_per_sec": 0, 00:16:20.290 "w_mbytes_per_sec": 0 00:16:20.290 }, 00:16:20.290 "claimed": false, 00:16:20.290 "zoned": false, 00:16:20.290 "supported_io_types": { 00:16:20.290 "read": true, 00:16:20.290 "write": true, 00:16:20.290 "unmap": false, 00:16:20.290 "flush": false, 00:16:20.290 "reset": true, 00:16:20.290 "nvme_admin": false, 00:16:20.290 "nvme_io": false, 00:16:20.290 "nvme_io_md": false, 00:16:20.290 "write_zeroes": true, 00:16:20.290 "zcopy": false, 00:16:20.290 "get_zone_info": false, 00:16:20.290 "zone_management": false, 00:16:20.290 "zone_append": false, 
00:16:20.290 "compare": false, 00:16:20.290 "compare_and_write": false, 00:16:20.290 "abort": false, 00:16:20.290 "seek_hole": false, 00:16:20.290 "seek_data": false, 00:16:20.290 "copy": false, 00:16:20.290 "nvme_iov_md": false 00:16:20.290 }, 00:16:20.290 "driver_specific": { 00:16:20.290 "raid": { 00:16:20.290 "uuid": "1ff98147-fa10-4942-a40b-331067d95f51", 00:16:20.290 "strip_size_kb": 64, 00:16:20.290 "state": "online", 00:16:20.290 "raid_level": "raid5f", 00:16:20.290 "superblock": false, 00:16:20.290 "num_base_bdevs": 3, 00:16:20.290 "num_base_bdevs_discovered": 3, 00:16:20.290 "num_base_bdevs_operational": 3, 00:16:20.290 "base_bdevs_list": [ 00:16:20.290 { 00:16:20.290 "name": "BaseBdev1", 00:16:20.290 "uuid": "548b3020-fa63-4ec8-a029-0d0d1448964f", 00:16:20.290 "is_configured": true, 00:16:20.290 "data_offset": 0, 00:16:20.290 "data_size": 65536 00:16:20.290 }, 00:16:20.290 { 00:16:20.290 "name": "BaseBdev2", 00:16:20.290 "uuid": "eaea85ec-93b9-4e9d-94ff-64cbf9769bc8", 00:16:20.290 "is_configured": true, 00:16:20.291 "data_offset": 0, 00:16:20.291 "data_size": 65536 00:16:20.291 }, 00:16:20.291 { 00:16:20.291 "name": "BaseBdev3", 00:16:20.291 "uuid": "627fc985-8966-4593-bb98-ce65909fdf06", 00:16:20.291 "is_configured": true, 00:16:20.291 "data_offset": 0, 00:16:20.291 "data_size": 65536 00:16:20.291 } 00:16:20.291 ] 00:16:20.291 } 00:16:20.291 } 00:16:20.291 }' 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:20.291 BaseBdev2 00:16:20.291 BaseBdev3' 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.291 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.549 [2024-11-20 11:26:03.408856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:20.549 
11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.549 "name": "Existed_Raid", 00:16:20.549 "uuid": "1ff98147-fa10-4942-a40b-331067d95f51", 00:16:20.549 "strip_size_kb": 64, 00:16:20.549 "state": 
"online", 00:16:20.549 "raid_level": "raid5f", 00:16:20.549 "superblock": false, 00:16:20.549 "num_base_bdevs": 3, 00:16:20.549 "num_base_bdevs_discovered": 2, 00:16:20.549 "num_base_bdevs_operational": 2, 00:16:20.549 "base_bdevs_list": [ 00:16:20.549 { 00:16:20.549 "name": null, 00:16:20.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.549 "is_configured": false, 00:16:20.549 "data_offset": 0, 00:16:20.549 "data_size": 65536 00:16:20.549 }, 00:16:20.549 { 00:16:20.549 "name": "BaseBdev2", 00:16:20.549 "uuid": "eaea85ec-93b9-4e9d-94ff-64cbf9769bc8", 00:16:20.549 "is_configured": true, 00:16:20.549 "data_offset": 0, 00:16:20.549 "data_size": 65536 00:16:20.549 }, 00:16:20.549 { 00:16:20.549 "name": "BaseBdev3", 00:16:20.549 "uuid": "627fc985-8966-4593-bb98-ce65909fdf06", 00:16:20.549 "is_configured": true, 00:16:20.549 "data_offset": 0, 00:16:20.549 "data_size": 65536 00:16:20.549 } 00:16:20.549 ] 00:16:20.549 }' 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.549 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.116 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:21.116 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:21.116 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.116 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:21.116 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.116 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.116 11:26:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.116 11:26:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:21.116 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:21.116 11:26:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:21.116 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.116 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.116 [2024-11-20 11:26:04.005123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:21.116 [2024-11-20 11:26:04.005234] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.116 [2024-11-20 11:26:04.116926] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.116 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.116 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:21.116 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:21.116 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:21.116 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.116 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.116 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.116 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.116 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:21.116 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:16:21.116 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:21.116 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.116 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.116 [2024-11-20 11:26:04.176901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:21.116 [2024-11-20 11:26:04.177038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:21.375 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.375 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:21.375 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:21.375 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:21.375 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.375 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.375 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.375 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.375 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:21.375 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:21.375 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:21.375 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:21.375 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:16:21.375 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:21.375 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.375 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.375 BaseBdev2 00:16:21.375 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.375 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:21.376 [ 00:16:21.376 { 00:16:21.376 "name": "BaseBdev2", 00:16:21.376 "aliases": [ 00:16:21.376 "d74dd97a-f027-40bd-af61-17a107ffdb2b" 00:16:21.376 ], 00:16:21.376 "product_name": "Malloc disk", 00:16:21.376 "block_size": 512, 00:16:21.376 "num_blocks": 65536, 00:16:21.376 "uuid": "d74dd97a-f027-40bd-af61-17a107ffdb2b", 00:16:21.376 "assigned_rate_limits": { 00:16:21.376 "rw_ios_per_sec": 0, 00:16:21.376 "rw_mbytes_per_sec": 0, 00:16:21.376 "r_mbytes_per_sec": 0, 00:16:21.376 "w_mbytes_per_sec": 0 00:16:21.376 }, 00:16:21.376 "claimed": false, 00:16:21.376 "zoned": false, 00:16:21.376 "supported_io_types": { 00:16:21.376 "read": true, 00:16:21.376 "write": true, 00:16:21.376 "unmap": true, 00:16:21.376 "flush": true, 00:16:21.376 "reset": true, 00:16:21.376 "nvme_admin": false, 00:16:21.376 "nvme_io": false, 00:16:21.376 "nvme_io_md": false, 00:16:21.376 "write_zeroes": true, 00:16:21.376 "zcopy": true, 00:16:21.376 "get_zone_info": false, 00:16:21.376 "zone_management": false, 00:16:21.376 "zone_append": false, 00:16:21.376 "compare": false, 00:16:21.376 "compare_and_write": false, 00:16:21.376 "abort": true, 00:16:21.376 "seek_hole": false, 00:16:21.376 "seek_data": false, 00:16:21.376 "copy": true, 00:16:21.376 "nvme_iov_md": false 00:16:21.376 }, 00:16:21.376 "memory_domains": [ 00:16:21.376 { 00:16:21.376 "dma_device_id": "system", 00:16:21.376 "dma_device_type": 1 00:16:21.376 }, 00:16:21.376 { 00:16:21.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.376 "dma_device_type": 2 00:16:21.376 } 00:16:21.376 ], 00:16:21.376 "driver_specific": {} 00:16:21.376 } 00:16:21.376 ] 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.376 BaseBdev3 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.376 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:21.635 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.635 11:26:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:21.635 [ 00:16:21.635 { 00:16:21.635 "name": "BaseBdev3", 00:16:21.635 "aliases": [ 00:16:21.635 "c81dce59-4610-4641-b77d-6b92341d61ce" 00:16:21.635 ], 00:16:21.635 "product_name": "Malloc disk", 00:16:21.635 "block_size": 512, 00:16:21.635 "num_blocks": 65536, 00:16:21.635 "uuid": "c81dce59-4610-4641-b77d-6b92341d61ce", 00:16:21.635 "assigned_rate_limits": { 00:16:21.635 "rw_ios_per_sec": 0, 00:16:21.635 "rw_mbytes_per_sec": 0, 00:16:21.635 "r_mbytes_per_sec": 0, 00:16:21.635 "w_mbytes_per_sec": 0 00:16:21.635 }, 00:16:21.635 "claimed": false, 00:16:21.635 "zoned": false, 00:16:21.635 "supported_io_types": { 00:16:21.635 "read": true, 00:16:21.635 "write": true, 00:16:21.635 "unmap": true, 00:16:21.635 "flush": true, 00:16:21.635 "reset": true, 00:16:21.635 "nvme_admin": false, 00:16:21.635 "nvme_io": false, 00:16:21.635 "nvme_io_md": false, 00:16:21.635 "write_zeroes": true, 00:16:21.635 "zcopy": true, 00:16:21.635 "get_zone_info": false, 00:16:21.635 "zone_management": false, 00:16:21.635 "zone_append": false, 00:16:21.635 "compare": false, 00:16:21.635 "compare_and_write": false, 00:16:21.635 "abort": true, 00:16:21.635 "seek_hole": false, 00:16:21.635 "seek_data": false, 00:16:21.635 "copy": true, 00:16:21.635 "nvme_iov_md": false 00:16:21.635 }, 00:16:21.635 "memory_domains": [ 00:16:21.635 { 00:16:21.635 "dma_device_id": "system", 00:16:21.635 "dma_device_type": 1 00:16:21.635 }, 00:16:21.635 { 00:16:21.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.635 "dma_device_type": 2 00:16:21.635 } 00:16:21.635 ], 00:16:21.635 "driver_specific": {} 00:16:21.635 } 00:16:21.635 ] 00:16:21.635 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.635 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:21.635 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:21.635 11:26:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:21.635 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:21.635 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.635 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.635 [2024-11-20 11:26:04.524200] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:21.635 [2024-11-20 11:26:04.524425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:21.635 [2024-11-20 11:26:04.524585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:21.635 [2024-11-20 11:26:04.527836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:21.635 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.635 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:21.635 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.635 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.635 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.635 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.635 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.635 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.635 11:26:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.635 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.636 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.636 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.636 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.636 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.636 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.636 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.636 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.636 "name": "Existed_Raid", 00:16:21.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.636 "strip_size_kb": 64, 00:16:21.636 "state": "configuring", 00:16:21.636 "raid_level": "raid5f", 00:16:21.636 "superblock": false, 00:16:21.636 "num_base_bdevs": 3, 00:16:21.636 "num_base_bdevs_discovered": 2, 00:16:21.636 "num_base_bdevs_operational": 3, 00:16:21.636 "base_bdevs_list": [ 00:16:21.636 { 00:16:21.636 "name": "BaseBdev1", 00:16:21.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.636 "is_configured": false, 00:16:21.636 "data_offset": 0, 00:16:21.636 "data_size": 0 00:16:21.636 }, 00:16:21.636 { 00:16:21.636 "name": "BaseBdev2", 00:16:21.636 "uuid": "d74dd97a-f027-40bd-af61-17a107ffdb2b", 00:16:21.636 "is_configured": true, 00:16:21.636 "data_offset": 0, 00:16:21.636 "data_size": 65536 00:16:21.636 }, 00:16:21.636 { 00:16:21.636 "name": "BaseBdev3", 00:16:21.636 "uuid": "c81dce59-4610-4641-b77d-6b92341d61ce", 00:16:21.636 "is_configured": true, 
00:16:21.636 "data_offset": 0, 00:16:21.636 "data_size": 65536 00:16:21.636 } 00:16:21.636 ] 00:16:21.636 }' 00:16:21.636 11:26:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.636 11:26:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.202 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:22.202 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.202 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.202 [2024-11-20 11:26:05.011556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:22.202 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.202 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:22.202 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.202 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.202 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.202 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.202 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.202 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.202 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.202 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.202 11:26:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.202 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.202 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.202 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.202 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.202 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.202 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.202 "name": "Existed_Raid", 00:16:22.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.202 "strip_size_kb": 64, 00:16:22.202 "state": "configuring", 00:16:22.202 "raid_level": "raid5f", 00:16:22.202 "superblock": false, 00:16:22.202 "num_base_bdevs": 3, 00:16:22.202 "num_base_bdevs_discovered": 1, 00:16:22.202 "num_base_bdevs_operational": 3, 00:16:22.202 "base_bdevs_list": [ 00:16:22.202 { 00:16:22.202 "name": "BaseBdev1", 00:16:22.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.202 "is_configured": false, 00:16:22.202 "data_offset": 0, 00:16:22.202 "data_size": 0 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "name": null, 00:16:22.202 "uuid": "d74dd97a-f027-40bd-af61-17a107ffdb2b", 00:16:22.202 "is_configured": false, 00:16:22.202 "data_offset": 0, 00:16:22.202 "data_size": 65536 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "name": "BaseBdev3", 00:16:22.202 "uuid": "c81dce59-4610-4641-b77d-6b92341d61ce", 00:16:22.202 "is_configured": true, 00:16:22.202 "data_offset": 0, 00:16:22.202 "data_size": 65536 00:16:22.202 } 00:16:22.202 ] 00:16:22.202 }' 00:16:22.202 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.202 11:26:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.461 [2024-11-20 11:26:05.535438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.461 BaseBdev1 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:22.461 11:26:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.461 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.461 [ 00:16:22.461 { 00:16:22.461 "name": "BaseBdev1", 00:16:22.461 "aliases": [ 00:16:22.461 "ff33da84-9e2e-4e04-addd-d25803ee1e44" 00:16:22.461 ], 00:16:22.461 "product_name": "Malloc disk", 00:16:22.461 "block_size": 512, 00:16:22.461 "num_blocks": 65536, 00:16:22.461 "uuid": "ff33da84-9e2e-4e04-addd-d25803ee1e44", 00:16:22.461 "assigned_rate_limits": { 00:16:22.461 "rw_ios_per_sec": 0, 00:16:22.461 "rw_mbytes_per_sec": 0, 00:16:22.461 "r_mbytes_per_sec": 0, 00:16:22.461 "w_mbytes_per_sec": 0 00:16:22.461 }, 00:16:22.461 "claimed": true, 00:16:22.461 "claim_type": "exclusive_write", 00:16:22.461 "zoned": false, 00:16:22.461 "supported_io_types": { 00:16:22.461 "read": true, 00:16:22.461 "write": true, 00:16:22.461 "unmap": true, 00:16:22.461 "flush": true, 00:16:22.461 "reset": true, 00:16:22.461 "nvme_admin": false, 00:16:22.461 "nvme_io": false, 00:16:22.461 "nvme_io_md": false, 00:16:22.461 "write_zeroes": true, 00:16:22.461 "zcopy": true, 00:16:22.461 "get_zone_info": false, 00:16:22.461 "zone_management": false, 00:16:22.461 "zone_append": false, 00:16:22.461 
"compare": false, 00:16:22.461 "compare_and_write": false, 00:16:22.461 "abort": true, 00:16:22.461 "seek_hole": false, 00:16:22.461 "seek_data": false, 00:16:22.461 "copy": true, 00:16:22.461 "nvme_iov_md": false 00:16:22.461 }, 00:16:22.461 "memory_domains": [ 00:16:22.720 { 00:16:22.720 "dma_device_id": "system", 00:16:22.720 "dma_device_type": 1 00:16:22.720 }, 00:16:22.720 { 00:16:22.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.720 "dma_device_type": 2 00:16:22.720 } 00:16:22.720 ], 00:16:22.720 "driver_specific": {} 00:16:22.720 } 00:16:22.720 ] 00:16:22.720 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.720 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:22.720 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:22.720 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.720 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.720 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.720 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.720 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.720 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.720 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.720 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.720 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.720 11:26:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.720 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.720 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.720 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.720 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.720 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.720 "name": "Existed_Raid", 00:16:22.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.720 "strip_size_kb": 64, 00:16:22.720 "state": "configuring", 00:16:22.720 "raid_level": "raid5f", 00:16:22.720 "superblock": false, 00:16:22.720 "num_base_bdevs": 3, 00:16:22.720 "num_base_bdevs_discovered": 2, 00:16:22.720 "num_base_bdevs_operational": 3, 00:16:22.720 "base_bdevs_list": [ 00:16:22.720 { 00:16:22.720 "name": "BaseBdev1", 00:16:22.720 "uuid": "ff33da84-9e2e-4e04-addd-d25803ee1e44", 00:16:22.720 "is_configured": true, 00:16:22.720 "data_offset": 0, 00:16:22.720 "data_size": 65536 00:16:22.720 }, 00:16:22.720 { 00:16:22.720 "name": null, 00:16:22.720 "uuid": "d74dd97a-f027-40bd-af61-17a107ffdb2b", 00:16:22.720 "is_configured": false, 00:16:22.720 "data_offset": 0, 00:16:22.720 "data_size": 65536 00:16:22.720 }, 00:16:22.720 { 00:16:22.720 "name": "BaseBdev3", 00:16:22.720 "uuid": "c81dce59-4610-4641-b77d-6b92341d61ce", 00:16:22.720 "is_configured": true, 00:16:22.720 "data_offset": 0, 00:16:22.720 "data_size": 65536 00:16:22.720 } 00:16:22.720 ] 00:16:22.720 }' 00:16:22.720 11:26:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.720 11:26:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.978 11:26:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.978 11:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.978 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:22.978 11:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.979 11:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.979 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:22.979 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:22.979 11:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.979 11:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.979 [2024-11-20 11:26:06.082604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:22.979 11:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.979 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:22.979 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.979 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.979 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.979 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.979 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.979 11:26:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.979 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.979 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.979 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.237 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.237 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.237 11:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.237 11:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.237 11:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.237 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.237 "name": "Existed_Raid", 00:16:23.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.237 "strip_size_kb": 64, 00:16:23.237 "state": "configuring", 00:16:23.237 "raid_level": "raid5f", 00:16:23.237 "superblock": false, 00:16:23.237 "num_base_bdevs": 3, 00:16:23.237 "num_base_bdevs_discovered": 1, 00:16:23.237 "num_base_bdevs_operational": 3, 00:16:23.237 "base_bdevs_list": [ 00:16:23.237 { 00:16:23.237 "name": "BaseBdev1", 00:16:23.237 "uuid": "ff33da84-9e2e-4e04-addd-d25803ee1e44", 00:16:23.237 "is_configured": true, 00:16:23.237 "data_offset": 0, 00:16:23.237 "data_size": 65536 00:16:23.237 }, 00:16:23.237 { 00:16:23.237 "name": null, 00:16:23.237 "uuid": "d74dd97a-f027-40bd-af61-17a107ffdb2b", 00:16:23.237 "is_configured": false, 00:16:23.237 "data_offset": 0, 00:16:23.237 "data_size": 65536 00:16:23.237 }, 00:16:23.237 { 00:16:23.237 "name": null, 
00:16:23.237 "uuid": "c81dce59-4610-4641-b77d-6b92341d61ce", 00:16:23.237 "is_configured": false, 00:16:23.237 "data_offset": 0, 00:16:23.237 "data_size": 65536 00:16:23.237 } 00:16:23.237 ] 00:16:23.237 }' 00:16:23.237 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.237 11:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.495 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.495 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:23.495 11:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.495 11:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.495 11:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.754 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:23.754 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:23.754 11:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.754 11:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.754 [2024-11-20 11:26:06.621725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:23.754 11:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.754 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:23.755 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.755 11:26:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.755 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.755 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.755 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:23.755 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.755 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.755 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.755 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.755 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.755 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.755 11:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.755 11:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.755 11:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.755 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.755 "name": "Existed_Raid", 00:16:23.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.755 "strip_size_kb": 64, 00:16:23.755 "state": "configuring", 00:16:23.755 "raid_level": "raid5f", 00:16:23.755 "superblock": false, 00:16:23.755 "num_base_bdevs": 3, 00:16:23.755 "num_base_bdevs_discovered": 2, 00:16:23.755 "num_base_bdevs_operational": 3, 00:16:23.755 "base_bdevs_list": [ 00:16:23.755 { 
00:16:23.755 "name": "BaseBdev1", 00:16:23.755 "uuid": "ff33da84-9e2e-4e04-addd-d25803ee1e44", 00:16:23.755 "is_configured": true, 00:16:23.755 "data_offset": 0, 00:16:23.755 "data_size": 65536 00:16:23.755 }, 00:16:23.755 { 00:16:23.755 "name": null, 00:16:23.755 "uuid": "d74dd97a-f027-40bd-af61-17a107ffdb2b", 00:16:23.755 "is_configured": false, 00:16:23.755 "data_offset": 0, 00:16:23.755 "data_size": 65536 00:16:23.755 }, 00:16:23.755 { 00:16:23.755 "name": "BaseBdev3", 00:16:23.755 "uuid": "c81dce59-4610-4641-b77d-6b92341d61ce", 00:16:23.755 "is_configured": true, 00:16:23.755 "data_offset": 0, 00:16:23.755 "data_size": 65536 00:16:23.755 } 00:16:23.755 ] 00:16:23.755 }' 00:16:23.755 11:26:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.755 11:26:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.321 [2024-11-20 11:26:07.196797] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.321 "name": "Existed_Raid", 00:16:24.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.321 "strip_size_kb": 64, 00:16:24.321 "state": "configuring", 00:16:24.321 "raid_level": "raid5f", 00:16:24.321 "superblock": false, 00:16:24.321 "num_base_bdevs": 3, 00:16:24.321 "num_base_bdevs_discovered": 1, 00:16:24.321 "num_base_bdevs_operational": 3, 00:16:24.321 "base_bdevs_list": [ 00:16:24.321 { 00:16:24.321 "name": null, 00:16:24.321 "uuid": "ff33da84-9e2e-4e04-addd-d25803ee1e44", 00:16:24.321 "is_configured": false, 00:16:24.321 "data_offset": 0, 00:16:24.321 "data_size": 65536 00:16:24.321 }, 00:16:24.321 { 00:16:24.321 "name": null, 00:16:24.321 "uuid": "d74dd97a-f027-40bd-af61-17a107ffdb2b", 00:16:24.321 "is_configured": false, 00:16:24.321 "data_offset": 0, 00:16:24.321 "data_size": 65536 00:16:24.321 }, 00:16:24.321 { 00:16:24.321 "name": "BaseBdev3", 00:16:24.321 "uuid": "c81dce59-4610-4641-b77d-6b92341d61ce", 00:16:24.321 "is_configured": true, 00:16:24.321 "data_offset": 0, 00:16:24.321 "data_size": 65536 00:16:24.321 } 00:16:24.321 ] 00:16:24.321 }' 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.321 11:26:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.886 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:24.886 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.886 11:26:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.887 [2024-11-20 11:26:07.871872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.887 11:26:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.887 "name": "Existed_Raid", 00:16:24.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.887 "strip_size_kb": 64, 00:16:24.887 "state": "configuring", 00:16:24.887 "raid_level": "raid5f", 00:16:24.887 "superblock": false, 00:16:24.887 "num_base_bdevs": 3, 00:16:24.887 "num_base_bdevs_discovered": 2, 00:16:24.887 "num_base_bdevs_operational": 3, 00:16:24.887 "base_bdevs_list": [ 00:16:24.887 { 00:16:24.887 "name": null, 00:16:24.887 "uuid": "ff33da84-9e2e-4e04-addd-d25803ee1e44", 00:16:24.887 "is_configured": false, 00:16:24.887 "data_offset": 0, 00:16:24.887 "data_size": 65536 00:16:24.887 }, 00:16:24.887 { 00:16:24.887 "name": "BaseBdev2", 00:16:24.887 "uuid": "d74dd97a-f027-40bd-af61-17a107ffdb2b", 00:16:24.887 "is_configured": true, 00:16:24.887 "data_offset": 0, 00:16:24.887 "data_size": 65536 00:16:24.887 }, 00:16:24.887 { 00:16:24.887 "name": "BaseBdev3", 00:16:24.887 "uuid": "c81dce59-4610-4641-b77d-6b92341d61ce", 00:16:24.887 "is_configured": true, 00:16:24.887 "data_offset": 0, 00:16:24.887 "data_size": 65536 00:16:24.887 } 00:16:24.887 ] 00:16:24.887 }' 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.887 11:26:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.453 11:26:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ff33da84-9e2e-4e04-addd-d25803ee1e44 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.453 [2024-11-20 11:26:08.484314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:25.453 [2024-11-20 11:26:08.484384] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:25.453 [2024-11-20 11:26:08.484395] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:25.453 [2024-11-20 11:26:08.484741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:16:25.453 [2024-11-20 11:26:08.491217] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:25.453 [2024-11-20 11:26:08.491321] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:25.453 [2024-11-20 11:26:08.491713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.453 NewBaseBdev 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.453 11:26:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.453 [ 00:16:25.453 { 00:16:25.453 "name": "NewBaseBdev", 00:16:25.453 "aliases": [ 00:16:25.453 "ff33da84-9e2e-4e04-addd-d25803ee1e44" 00:16:25.453 ], 00:16:25.453 "product_name": "Malloc disk", 00:16:25.453 "block_size": 512, 00:16:25.453 "num_blocks": 65536, 00:16:25.453 "uuid": "ff33da84-9e2e-4e04-addd-d25803ee1e44", 00:16:25.453 "assigned_rate_limits": { 00:16:25.453 "rw_ios_per_sec": 0, 00:16:25.453 "rw_mbytes_per_sec": 0, 00:16:25.453 "r_mbytes_per_sec": 0, 00:16:25.453 "w_mbytes_per_sec": 0 00:16:25.453 }, 00:16:25.453 "claimed": true, 00:16:25.453 "claim_type": "exclusive_write", 00:16:25.453 "zoned": false, 00:16:25.453 "supported_io_types": { 00:16:25.453 "read": true, 00:16:25.453 "write": true, 00:16:25.453 "unmap": true, 00:16:25.453 "flush": true, 00:16:25.453 "reset": true, 00:16:25.453 "nvme_admin": false, 00:16:25.453 "nvme_io": false, 00:16:25.453 "nvme_io_md": false, 00:16:25.453 "write_zeroes": true, 00:16:25.453 "zcopy": true, 00:16:25.453 "get_zone_info": false, 00:16:25.453 "zone_management": false, 00:16:25.453 "zone_append": false, 00:16:25.453 "compare": false, 00:16:25.453 "compare_and_write": false, 00:16:25.453 "abort": true, 00:16:25.453 "seek_hole": false, 00:16:25.453 "seek_data": false, 00:16:25.453 "copy": true, 00:16:25.453 "nvme_iov_md": false 00:16:25.453 }, 00:16:25.453 "memory_domains": [ 00:16:25.453 { 00:16:25.453 "dma_device_id": "system", 00:16:25.453 "dma_device_type": 1 00:16:25.453 }, 00:16:25.453 { 00:16:25.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.453 "dma_device_type": 2 00:16:25.453 } 00:16:25.453 ], 00:16:25.453 "driver_specific": {} 00:16:25.453 } 00:16:25.453 ] 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:25.453 11:26:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.453 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.711 11:26:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.711 "name": "Existed_Raid", 00:16:25.711 "uuid": "3016fd02-ebb6-4b42-b7ca-0e3ef677b3c2", 00:16:25.711 "strip_size_kb": 64, 00:16:25.711 "state": "online", 
00:16:25.711 "raid_level": "raid5f", 00:16:25.711 "superblock": false, 00:16:25.711 "num_base_bdevs": 3, 00:16:25.711 "num_base_bdevs_discovered": 3, 00:16:25.711 "num_base_bdevs_operational": 3, 00:16:25.711 "base_bdevs_list": [ 00:16:25.711 { 00:16:25.711 "name": "NewBaseBdev", 00:16:25.711 "uuid": "ff33da84-9e2e-4e04-addd-d25803ee1e44", 00:16:25.711 "is_configured": true, 00:16:25.711 "data_offset": 0, 00:16:25.711 "data_size": 65536 00:16:25.711 }, 00:16:25.711 { 00:16:25.711 "name": "BaseBdev2", 00:16:25.711 "uuid": "d74dd97a-f027-40bd-af61-17a107ffdb2b", 00:16:25.711 "is_configured": true, 00:16:25.711 "data_offset": 0, 00:16:25.711 "data_size": 65536 00:16:25.711 }, 00:16:25.711 { 00:16:25.711 "name": "BaseBdev3", 00:16:25.711 "uuid": "c81dce59-4610-4641-b77d-6b92341d61ce", 00:16:25.711 "is_configured": true, 00:16:25.711 "data_offset": 0, 00:16:25.711 "data_size": 65536 00:16:25.711 } 00:16:25.711 ] 00:16:25.711 }' 00:16:25.711 11:26:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.711 11:26:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.968 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:25.968 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:25.968 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:25.968 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:25.968 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:25.968 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:25.968 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:25.968 11:26:09 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:25.968 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.968 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.968 [2024-11-20 11:26:09.030834] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.968 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.968 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:25.968 "name": "Existed_Raid", 00:16:25.968 "aliases": [ 00:16:25.968 "3016fd02-ebb6-4b42-b7ca-0e3ef677b3c2" 00:16:25.968 ], 00:16:25.968 "product_name": "Raid Volume", 00:16:25.968 "block_size": 512, 00:16:25.968 "num_blocks": 131072, 00:16:25.968 "uuid": "3016fd02-ebb6-4b42-b7ca-0e3ef677b3c2", 00:16:25.968 "assigned_rate_limits": { 00:16:25.968 "rw_ios_per_sec": 0, 00:16:25.968 "rw_mbytes_per_sec": 0, 00:16:25.968 "r_mbytes_per_sec": 0, 00:16:25.968 "w_mbytes_per_sec": 0 00:16:25.968 }, 00:16:25.968 "claimed": false, 00:16:25.968 "zoned": false, 00:16:25.968 "supported_io_types": { 00:16:25.968 "read": true, 00:16:25.968 "write": true, 00:16:25.968 "unmap": false, 00:16:25.968 "flush": false, 00:16:25.968 "reset": true, 00:16:25.968 "nvme_admin": false, 00:16:25.968 "nvme_io": false, 00:16:25.968 "nvme_io_md": false, 00:16:25.968 "write_zeroes": true, 00:16:25.968 "zcopy": false, 00:16:25.968 "get_zone_info": false, 00:16:25.968 "zone_management": false, 00:16:25.968 "zone_append": false, 00:16:25.968 "compare": false, 00:16:25.968 "compare_and_write": false, 00:16:25.968 "abort": false, 00:16:25.968 "seek_hole": false, 00:16:25.968 "seek_data": false, 00:16:25.968 "copy": false, 00:16:25.968 "nvme_iov_md": false 00:16:25.968 }, 00:16:25.968 "driver_specific": { 00:16:25.968 "raid": { 00:16:25.968 "uuid": "3016fd02-ebb6-4b42-b7ca-0e3ef677b3c2", 
00:16:25.968 "strip_size_kb": 64, 00:16:25.968 "state": "online", 00:16:25.968 "raid_level": "raid5f", 00:16:25.968 "superblock": false, 00:16:25.968 "num_base_bdevs": 3, 00:16:25.968 "num_base_bdevs_discovered": 3, 00:16:25.968 "num_base_bdevs_operational": 3, 00:16:25.968 "base_bdevs_list": [ 00:16:25.968 { 00:16:25.968 "name": "NewBaseBdev", 00:16:25.968 "uuid": "ff33da84-9e2e-4e04-addd-d25803ee1e44", 00:16:25.968 "is_configured": true, 00:16:25.968 "data_offset": 0, 00:16:25.968 "data_size": 65536 00:16:25.968 }, 00:16:25.968 { 00:16:25.968 "name": "BaseBdev2", 00:16:25.968 "uuid": "d74dd97a-f027-40bd-af61-17a107ffdb2b", 00:16:25.968 "is_configured": true, 00:16:25.968 "data_offset": 0, 00:16:25.968 "data_size": 65536 00:16:25.968 }, 00:16:25.968 { 00:16:25.968 "name": "BaseBdev3", 00:16:25.968 "uuid": "c81dce59-4610-4641-b77d-6b92341d61ce", 00:16:25.968 "is_configured": true, 00:16:25.968 "data_offset": 0, 00:16:25.968 "data_size": 65536 00:16:25.968 } 00:16:25.968 ] 00:16:25.968 } 00:16:25.968 } 00:16:25.968 }' 00:16:25.968 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:26.226 BaseBdev2 00:16:26.226 BaseBdev3' 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.226 11:26:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.226 [2024-11-20 11:26:09.314157] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:26.226 [2024-11-20 11:26:09.314190] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:26.226 [2024-11-20 11:26:09.314286] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.226 [2024-11-20 11:26:09.314619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:26.226 [2024-11-20 11:26:09.314635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80058 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80058 ']' 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80058 
00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:26.226 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80058 00:16:26.484 killing process with pid 80058 00:16:26.484 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:26.484 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:26.484 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80058' 00:16:26.485 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80058 00:16:26.485 [2024-11-20 11:26:09.349815] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:26.485 11:26:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80058 00:16:26.743 [2024-11-20 11:26:09.685409] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:28.117 ************************************ 00:16:28.117 END TEST raid5f_state_function_test 00:16:28.117 ************************************ 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:28.117 00:16:28.117 real 0m11.511s 00:16:28.117 user 0m18.267s 00:16:28.117 sys 0m2.023s 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.117 11:26:10 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:16:28.117 11:26:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:28.117 
11:26:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.117 11:26:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:28.117 ************************************ 00:16:28.117 START TEST raid5f_state_function_test_sb 00:16:28.117 ************************************ 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:28.117 
11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:28.117 Process raid pid: 80685 00:16:28.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80685 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80685' 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80685 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80685 ']' 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.117 11:26:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:28.117 [2024-11-20 11:26:11.080637] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:16:28.117 [2024-11-20 11:26:11.080881] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.375 [2024-11-20 11:26:11.259150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.375 [2024-11-20 11:26:11.388307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.633 [2024-11-20 11:26:11.623135] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.633 [2024-11-20 11:26:11.623170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.946 11:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.946 11:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:28.946 11:26:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:28.946 11:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.946 11:26:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.946 [2024-11-20 11:26:11.996949] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:28.946 [2024-11-20 11:26:11.997120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:28.946 [2024-11-20 11:26:11.997139] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:28.946 [2024-11-20 11:26:11.997152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:28.946 [2024-11-20 11:26:11.997159] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:16:28.946 [2024-11-20 11:26:11.997170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:28.946 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.946 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:28.946 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.946 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.946 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.946 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.946 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.946 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.946 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.946 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.946 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.946 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.946 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.946 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.946 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.946 11:26:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.946 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.946 "name": "Existed_Raid", 00:16:28.946 "uuid": "5caeaa94-4e9d-46d8-9fdf-6ebb694d64ec", 00:16:28.946 "strip_size_kb": 64, 00:16:28.946 "state": "configuring", 00:16:28.946 "raid_level": "raid5f", 00:16:28.946 "superblock": true, 00:16:28.946 "num_base_bdevs": 3, 00:16:28.946 "num_base_bdevs_discovered": 0, 00:16:28.946 "num_base_bdevs_operational": 3, 00:16:28.946 "base_bdevs_list": [ 00:16:28.946 { 00:16:28.946 "name": "BaseBdev1", 00:16:28.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.946 "is_configured": false, 00:16:28.946 "data_offset": 0, 00:16:28.946 "data_size": 0 00:16:28.946 }, 00:16:28.946 { 00:16:28.946 "name": "BaseBdev2", 00:16:28.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.946 "is_configured": false, 00:16:28.946 "data_offset": 0, 00:16:28.946 "data_size": 0 00:16:28.946 }, 00:16:28.946 { 00:16:28.946 "name": "BaseBdev3", 00:16:28.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.946 "is_configured": false, 00:16:28.946 "data_offset": 0, 00:16:28.946 "data_size": 0 00:16:28.946 } 00:16:28.946 ] 00:16:28.946 }' 00:16:28.946 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.946 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.513 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:29.513 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.513 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.513 [2024-11-20 11:26:12.428172] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:29.513 
[2024-11-20 11:26:12.428289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:29.513 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.513 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:29.513 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.513 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.513 [2024-11-20 11:26:12.440168] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.513 [2024-11-20 11:26:12.440271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.513 [2024-11-20 11:26:12.440326] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.513 [2024-11-20 11:26:12.440367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.513 [2024-11-20 11:26:12.440416] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:29.513 [2024-11-20 11:26:12.440442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:29.513 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.513 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:29.513 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.513 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.513 [2024-11-20 11:26:12.490229] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.513 BaseBdev1 00:16:29.513 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.513 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:29.513 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:29.513 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:29.513 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:29.513 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:29.513 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:29.513 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:29.513 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.513 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.514 [ 00:16:29.514 { 00:16:29.514 "name": "BaseBdev1", 00:16:29.514 "aliases": [ 00:16:29.514 "ffb77d2a-2ce0-414b-95e8-6d32fb82da00" 00:16:29.514 ], 00:16:29.514 "product_name": "Malloc disk", 00:16:29.514 "block_size": 512, 00:16:29.514 
"num_blocks": 65536, 00:16:29.514 "uuid": "ffb77d2a-2ce0-414b-95e8-6d32fb82da00", 00:16:29.514 "assigned_rate_limits": { 00:16:29.514 "rw_ios_per_sec": 0, 00:16:29.514 "rw_mbytes_per_sec": 0, 00:16:29.514 "r_mbytes_per_sec": 0, 00:16:29.514 "w_mbytes_per_sec": 0 00:16:29.514 }, 00:16:29.514 "claimed": true, 00:16:29.514 "claim_type": "exclusive_write", 00:16:29.514 "zoned": false, 00:16:29.514 "supported_io_types": { 00:16:29.514 "read": true, 00:16:29.514 "write": true, 00:16:29.514 "unmap": true, 00:16:29.514 "flush": true, 00:16:29.514 "reset": true, 00:16:29.514 "nvme_admin": false, 00:16:29.514 "nvme_io": false, 00:16:29.514 "nvme_io_md": false, 00:16:29.514 "write_zeroes": true, 00:16:29.514 "zcopy": true, 00:16:29.514 "get_zone_info": false, 00:16:29.514 "zone_management": false, 00:16:29.514 "zone_append": false, 00:16:29.514 "compare": false, 00:16:29.514 "compare_and_write": false, 00:16:29.514 "abort": true, 00:16:29.514 "seek_hole": false, 00:16:29.514 "seek_data": false, 00:16:29.514 "copy": true, 00:16:29.514 "nvme_iov_md": false 00:16:29.514 }, 00:16:29.514 "memory_domains": [ 00:16:29.514 { 00:16:29.514 "dma_device_id": "system", 00:16:29.514 "dma_device_type": 1 00:16:29.514 }, 00:16:29.514 { 00:16:29.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.514 "dma_device_type": 2 00:16:29.514 } 00:16:29.514 ], 00:16:29.514 "driver_specific": {} 00:16:29.514 } 00:16:29.514 ] 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.514 "name": "Existed_Raid", 00:16:29.514 "uuid": "06c6b588-a8a3-4cb6-a068-d6bfbb968116", 00:16:29.514 "strip_size_kb": 64, 00:16:29.514 "state": "configuring", 00:16:29.514 "raid_level": "raid5f", 00:16:29.514 "superblock": true, 00:16:29.514 "num_base_bdevs": 3, 00:16:29.514 "num_base_bdevs_discovered": 1, 00:16:29.514 "num_base_bdevs_operational": 3, 00:16:29.514 "base_bdevs_list": [ 00:16:29.514 { 00:16:29.514 
"name": "BaseBdev1", 00:16:29.514 "uuid": "ffb77d2a-2ce0-414b-95e8-6d32fb82da00", 00:16:29.514 "is_configured": true, 00:16:29.514 "data_offset": 2048, 00:16:29.514 "data_size": 63488 00:16:29.514 }, 00:16:29.514 { 00:16:29.514 "name": "BaseBdev2", 00:16:29.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.514 "is_configured": false, 00:16:29.514 "data_offset": 0, 00:16:29.514 "data_size": 0 00:16:29.514 }, 00:16:29.514 { 00:16:29.514 "name": "BaseBdev3", 00:16:29.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.514 "is_configured": false, 00:16:29.514 "data_offset": 0, 00:16:29.514 "data_size": 0 00:16:29.514 } 00:16:29.514 ] 00:16:29.514 }' 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.514 11:26:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.081 [2024-11-20 11:26:13.009397] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.081 [2024-11-20 11:26:13.009538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:30.081 [2024-11-20 11:26:13.021433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.081 [2024-11-20 11:26:13.023345] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.081 [2024-11-20 11:26:13.023425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.081 [2024-11-20 11:26:13.023440] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:30.081 [2024-11-20 11:26:13.023460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.081 "name": "Existed_Raid", 00:16:30.081 "uuid": "e33110c6-45e2-4478-9327-6310dc78c610", 00:16:30.081 "strip_size_kb": 64, 00:16:30.081 "state": "configuring", 00:16:30.081 "raid_level": "raid5f", 00:16:30.081 "superblock": true, 00:16:30.081 "num_base_bdevs": 3, 00:16:30.081 "num_base_bdevs_discovered": 1, 00:16:30.081 "num_base_bdevs_operational": 3, 00:16:30.081 "base_bdevs_list": [ 00:16:30.081 { 00:16:30.081 "name": "BaseBdev1", 00:16:30.081 "uuid": "ffb77d2a-2ce0-414b-95e8-6d32fb82da00", 00:16:30.081 "is_configured": true, 00:16:30.081 "data_offset": 2048, 00:16:30.081 "data_size": 63488 00:16:30.081 }, 00:16:30.081 { 00:16:30.081 "name": "BaseBdev2", 00:16:30.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.081 "is_configured": false, 00:16:30.081 "data_offset": 0, 00:16:30.081 "data_size": 0 00:16:30.081 }, 00:16:30.081 { 00:16:30.081 "name": "BaseBdev3", 00:16:30.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.081 "is_configured": false, 00:16:30.081 "data_offset": 0, 00:16:30.081 "data_size": 
0 00:16:30.081 } 00:16:30.081 ] 00:16:30.081 }' 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.081 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.650 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:30.650 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.650 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.650 [2024-11-20 11:26:13.527924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:30.650 BaseBdev2 00:16:30.650 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.650 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:30.650 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:30.650 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:30.650 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:30.650 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:30.650 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:30.650 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:30.650 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.650 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.650 11:26:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.650 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:30.650 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.650 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.650 [ 00:16:30.650 { 00:16:30.650 "name": "BaseBdev2", 00:16:30.650 "aliases": [ 00:16:30.651 "74489bbc-00f2-4d1f-acf8-fb43f2ff0099" 00:16:30.651 ], 00:16:30.651 "product_name": "Malloc disk", 00:16:30.651 "block_size": 512, 00:16:30.651 "num_blocks": 65536, 00:16:30.651 "uuid": "74489bbc-00f2-4d1f-acf8-fb43f2ff0099", 00:16:30.651 "assigned_rate_limits": { 00:16:30.651 "rw_ios_per_sec": 0, 00:16:30.651 "rw_mbytes_per_sec": 0, 00:16:30.651 "r_mbytes_per_sec": 0, 00:16:30.651 "w_mbytes_per_sec": 0 00:16:30.651 }, 00:16:30.651 "claimed": true, 00:16:30.651 "claim_type": "exclusive_write", 00:16:30.651 "zoned": false, 00:16:30.651 "supported_io_types": { 00:16:30.651 "read": true, 00:16:30.651 "write": true, 00:16:30.651 "unmap": true, 00:16:30.651 "flush": true, 00:16:30.651 "reset": true, 00:16:30.651 "nvme_admin": false, 00:16:30.651 "nvme_io": false, 00:16:30.651 "nvme_io_md": false, 00:16:30.651 "write_zeroes": true, 00:16:30.651 "zcopy": true, 00:16:30.651 "get_zone_info": false, 00:16:30.651 "zone_management": false, 00:16:30.651 "zone_append": false, 00:16:30.651 "compare": false, 00:16:30.651 "compare_and_write": false, 00:16:30.651 "abort": true, 00:16:30.651 "seek_hole": false, 00:16:30.651 "seek_data": false, 00:16:30.651 "copy": true, 00:16:30.651 "nvme_iov_md": false 00:16:30.651 }, 00:16:30.651 "memory_domains": [ 00:16:30.651 { 00:16:30.651 "dma_device_id": "system", 00:16:30.651 "dma_device_type": 1 00:16:30.651 }, 00:16:30.651 { 00:16:30.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.651 "dma_device_type": 2 00:16:30.651 } 
00:16:30.651 ], 00:16:30.651 "driver_specific": {} 00:16:30.651 } 00:16:30.651 ] 00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.651 "name": "Existed_Raid", 00:16:30.651 "uuid": "e33110c6-45e2-4478-9327-6310dc78c610", 00:16:30.651 "strip_size_kb": 64, 00:16:30.651 "state": "configuring", 00:16:30.651 "raid_level": "raid5f", 00:16:30.651 "superblock": true, 00:16:30.651 "num_base_bdevs": 3, 00:16:30.651 "num_base_bdevs_discovered": 2, 00:16:30.651 "num_base_bdevs_operational": 3, 00:16:30.651 "base_bdevs_list": [ 00:16:30.651 { 00:16:30.651 "name": "BaseBdev1", 00:16:30.651 "uuid": "ffb77d2a-2ce0-414b-95e8-6d32fb82da00", 00:16:30.651 "is_configured": true, 00:16:30.651 "data_offset": 2048, 00:16:30.651 "data_size": 63488 00:16:30.651 }, 00:16:30.651 { 00:16:30.651 "name": "BaseBdev2", 00:16:30.651 "uuid": "74489bbc-00f2-4d1f-acf8-fb43f2ff0099", 00:16:30.651 "is_configured": true, 00:16:30.651 "data_offset": 2048, 00:16:30.651 "data_size": 63488 00:16:30.651 }, 00:16:30.651 { 00:16:30.651 "name": "BaseBdev3", 00:16:30.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.651 "is_configured": false, 00:16:30.651 "data_offset": 0, 00:16:30.651 "data_size": 0 00:16:30.651 } 00:16:30.651 ] 00:16:30.651 }' 00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.651 11:26:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.222 [2024-11-20 11:26:14.105805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:31.222 [2024-11-20 11:26:14.106109] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:31.222 [2024-11-20 11:26:14.106135] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:31.222 [2024-11-20 11:26:14.106426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:31.222 BaseBdev3 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.222 [2024-11-20 11:26:14.113288] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:31.222 [2024-11-20 11:26:14.113362] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:31.222 [2024-11-20 11:26:14.113756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.222 [ 00:16:31.222 { 00:16:31.222 "name": "BaseBdev3", 00:16:31.222 "aliases": [ 00:16:31.222 "c9f7d2b8-959e-46e0-ba66-61feb52e5666" 00:16:31.222 ], 00:16:31.222 "product_name": "Malloc disk", 00:16:31.222 "block_size": 512, 00:16:31.222 "num_blocks": 65536, 00:16:31.222 "uuid": "c9f7d2b8-959e-46e0-ba66-61feb52e5666", 00:16:31.222 "assigned_rate_limits": { 00:16:31.222 "rw_ios_per_sec": 0, 00:16:31.222 "rw_mbytes_per_sec": 0, 00:16:31.222 "r_mbytes_per_sec": 0, 00:16:31.222 "w_mbytes_per_sec": 0 00:16:31.222 }, 00:16:31.222 "claimed": true, 00:16:31.222 "claim_type": "exclusive_write", 00:16:31.222 "zoned": false, 00:16:31.222 "supported_io_types": { 00:16:31.222 "read": true, 00:16:31.222 "write": true, 00:16:31.222 "unmap": true, 00:16:31.222 "flush": true, 00:16:31.222 "reset": true, 00:16:31.222 "nvme_admin": false, 00:16:31.222 "nvme_io": false, 00:16:31.222 "nvme_io_md": false, 00:16:31.222 "write_zeroes": true, 00:16:31.222 "zcopy": true, 00:16:31.222 "get_zone_info": false, 00:16:31.222 "zone_management": false, 00:16:31.222 "zone_append": false, 00:16:31.222 "compare": false, 00:16:31.222 "compare_and_write": false, 00:16:31.222 "abort": true, 00:16:31.222 "seek_hole": false, 00:16:31.222 "seek_data": false, 00:16:31.222 "copy": true, 00:16:31.222 
"nvme_iov_md": false 00:16:31.222 }, 00:16:31.222 "memory_domains": [ 00:16:31.222 { 00:16:31.222 "dma_device_id": "system", 00:16:31.222 "dma_device_type": 1 00:16:31.222 }, 00:16:31.222 { 00:16:31.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.222 "dma_device_type": 2 00:16:31.222 } 00:16:31.222 ], 00:16:31.222 "driver_specific": {} 00:16:31.222 } 00:16:31.222 ] 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.222 "name": "Existed_Raid", 00:16:31.222 "uuid": "e33110c6-45e2-4478-9327-6310dc78c610", 00:16:31.222 "strip_size_kb": 64, 00:16:31.222 "state": "online", 00:16:31.222 "raid_level": "raid5f", 00:16:31.222 "superblock": true, 00:16:31.222 "num_base_bdevs": 3, 00:16:31.222 "num_base_bdevs_discovered": 3, 00:16:31.222 "num_base_bdevs_operational": 3, 00:16:31.222 "base_bdevs_list": [ 00:16:31.222 { 00:16:31.222 "name": "BaseBdev1", 00:16:31.222 "uuid": "ffb77d2a-2ce0-414b-95e8-6d32fb82da00", 00:16:31.222 "is_configured": true, 00:16:31.222 "data_offset": 2048, 00:16:31.222 "data_size": 63488 00:16:31.222 }, 00:16:31.222 { 00:16:31.222 "name": "BaseBdev2", 00:16:31.222 "uuid": "74489bbc-00f2-4d1f-acf8-fb43f2ff0099", 00:16:31.222 "is_configured": true, 00:16:31.222 "data_offset": 2048, 00:16:31.222 "data_size": 63488 00:16:31.222 }, 00:16:31.222 { 00:16:31.222 "name": "BaseBdev3", 00:16:31.222 "uuid": "c9f7d2b8-959e-46e0-ba66-61feb52e5666", 00:16:31.222 "is_configured": true, 00:16:31.222 "data_offset": 2048, 00:16:31.222 "data_size": 63488 00:16:31.222 } 00:16:31.222 ] 00:16:31.222 }' 00:16:31.222 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.222 11:26:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.790 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:31.790 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:31.790 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:31.790 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:31.790 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:31.790 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:31.790 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:31.790 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.790 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:31.790 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.790 [2024-11-20 11:26:14.636725] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.790 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.790 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:31.790 "name": "Existed_Raid", 00:16:31.790 "aliases": [ 00:16:31.790 "e33110c6-45e2-4478-9327-6310dc78c610" 00:16:31.790 ], 00:16:31.790 "product_name": "Raid Volume", 00:16:31.790 "block_size": 512, 00:16:31.790 "num_blocks": 126976, 00:16:31.790 "uuid": "e33110c6-45e2-4478-9327-6310dc78c610", 00:16:31.790 "assigned_rate_limits": { 00:16:31.790 "rw_ios_per_sec": 0, 00:16:31.790 
"rw_mbytes_per_sec": 0, 00:16:31.790 "r_mbytes_per_sec": 0, 00:16:31.790 "w_mbytes_per_sec": 0 00:16:31.790 }, 00:16:31.790 "claimed": false, 00:16:31.790 "zoned": false, 00:16:31.790 "supported_io_types": { 00:16:31.790 "read": true, 00:16:31.790 "write": true, 00:16:31.790 "unmap": false, 00:16:31.790 "flush": false, 00:16:31.790 "reset": true, 00:16:31.790 "nvme_admin": false, 00:16:31.790 "nvme_io": false, 00:16:31.790 "nvme_io_md": false, 00:16:31.790 "write_zeroes": true, 00:16:31.790 "zcopy": false, 00:16:31.790 "get_zone_info": false, 00:16:31.790 "zone_management": false, 00:16:31.790 "zone_append": false, 00:16:31.790 "compare": false, 00:16:31.790 "compare_and_write": false, 00:16:31.790 "abort": false, 00:16:31.790 "seek_hole": false, 00:16:31.790 "seek_data": false, 00:16:31.790 "copy": false, 00:16:31.790 "nvme_iov_md": false 00:16:31.790 }, 00:16:31.790 "driver_specific": { 00:16:31.790 "raid": { 00:16:31.790 "uuid": "e33110c6-45e2-4478-9327-6310dc78c610", 00:16:31.790 "strip_size_kb": 64, 00:16:31.790 "state": "online", 00:16:31.790 "raid_level": "raid5f", 00:16:31.790 "superblock": true, 00:16:31.790 "num_base_bdevs": 3, 00:16:31.790 "num_base_bdevs_discovered": 3, 00:16:31.790 "num_base_bdevs_operational": 3, 00:16:31.790 "base_bdevs_list": [ 00:16:31.790 { 00:16:31.790 "name": "BaseBdev1", 00:16:31.790 "uuid": "ffb77d2a-2ce0-414b-95e8-6d32fb82da00", 00:16:31.790 "is_configured": true, 00:16:31.790 "data_offset": 2048, 00:16:31.790 "data_size": 63488 00:16:31.790 }, 00:16:31.790 { 00:16:31.790 "name": "BaseBdev2", 00:16:31.790 "uuid": "74489bbc-00f2-4d1f-acf8-fb43f2ff0099", 00:16:31.790 "is_configured": true, 00:16:31.790 "data_offset": 2048, 00:16:31.790 "data_size": 63488 00:16:31.790 }, 00:16:31.790 { 00:16:31.790 "name": "BaseBdev3", 00:16:31.790 "uuid": "c9f7d2b8-959e-46e0-ba66-61feb52e5666", 00:16:31.790 "is_configured": true, 00:16:31.790 "data_offset": 2048, 00:16:31.790 "data_size": 63488 00:16:31.790 } 00:16:31.791 ] 00:16:31.791 } 
00:16:31.791 } 00:16:31.791 }' 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:31.791 BaseBdev2 00:16:31.791 BaseBdev3' 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.791 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.049 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.049 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.049 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.049 11:26:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:32.049 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.049 11:26:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.049 [2024-11-20 
11:26:14.948018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.049 11:26:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.049 "name": "Existed_Raid", 00:16:32.049 "uuid": "e33110c6-45e2-4478-9327-6310dc78c610", 00:16:32.049 "strip_size_kb": 64, 00:16:32.049 "state": "online", 00:16:32.049 "raid_level": "raid5f", 00:16:32.049 "superblock": true, 00:16:32.049 "num_base_bdevs": 3, 00:16:32.049 "num_base_bdevs_discovered": 2, 00:16:32.049 "num_base_bdevs_operational": 2, 00:16:32.049 "base_bdevs_list": [ 00:16:32.049 { 00:16:32.049 "name": null, 00:16:32.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.049 "is_configured": false, 00:16:32.049 "data_offset": 0, 00:16:32.049 "data_size": 63488 00:16:32.049 }, 00:16:32.049 { 00:16:32.049 "name": "BaseBdev2", 00:16:32.049 "uuid": "74489bbc-00f2-4d1f-acf8-fb43f2ff0099", 00:16:32.049 "is_configured": true, 00:16:32.049 "data_offset": 2048, 00:16:32.049 "data_size": 63488 00:16:32.049 }, 00:16:32.049 { 00:16:32.049 "name": "BaseBdev3", 00:16:32.049 "uuid": "c9f7d2b8-959e-46e0-ba66-61feb52e5666", 00:16:32.049 "is_configured": true, 00:16:32.049 "data_offset": 2048, 00:16:32.049 "data_size": 63488 00:16:32.049 } 00:16:32.049 ] 00:16:32.049 }' 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.049 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:32.615 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:32.615 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:32.615 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.615 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.615 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.615 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:32.615 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.615 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:32.615 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:32.615 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:32.615 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.615 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.615 [2024-11-20 11:26:15.568486] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:32.615 [2024-11-20 11:26:15.568738] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.615 [2024-11-20 11:26:15.677380] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.615 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.615 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:32.615 11:26:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:32.615 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.615 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:32.615 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.615 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.615 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.874 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:32.874 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:32.874 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:32.874 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.874 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.874 [2024-11-20 11:26:15.741303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:32.875 [2024-11-20 11:26:15.741364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:32.875 
11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.875 BaseBdev2 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:32.875 11:26:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.875 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.875 [ 00:16:32.875 { 00:16:32.875 "name": "BaseBdev2", 00:16:32.875 "aliases": [ 00:16:32.875 "7f035bdf-3dfd-4b2a-a72a-b6ce23291d47" 00:16:32.875 ], 00:16:32.875 "product_name": "Malloc disk", 00:16:32.875 "block_size": 512, 00:16:32.875 "num_blocks": 65536, 00:16:32.875 "uuid": "7f035bdf-3dfd-4b2a-a72a-b6ce23291d47", 00:16:32.875 "assigned_rate_limits": { 00:16:32.875 "rw_ios_per_sec": 0, 00:16:32.875 "rw_mbytes_per_sec": 0, 00:16:32.875 "r_mbytes_per_sec": 0, 00:16:32.875 "w_mbytes_per_sec": 0 00:16:32.875 }, 00:16:32.875 "claimed": false, 00:16:32.875 "zoned": false, 00:16:32.875 "supported_io_types": { 00:16:32.875 "read": true, 00:16:32.875 "write": true, 00:16:32.875 "unmap": true, 00:16:32.875 "flush": true, 00:16:32.875 "reset": true, 00:16:32.875 "nvme_admin": false, 00:16:32.875 "nvme_io": false, 00:16:32.875 "nvme_io_md": false, 00:16:32.875 "write_zeroes": true, 00:16:32.875 "zcopy": true, 00:16:32.875 "get_zone_info": false, 
00:16:32.875 "zone_management": false, 00:16:32.875 "zone_append": false, 00:16:32.875 "compare": false, 00:16:33.133 "compare_and_write": false, 00:16:33.133 "abort": true, 00:16:33.133 "seek_hole": false, 00:16:33.133 "seek_data": false, 00:16:33.133 "copy": true, 00:16:33.133 "nvme_iov_md": false 00:16:33.133 }, 00:16:33.133 "memory_domains": [ 00:16:33.133 { 00:16:33.133 "dma_device_id": "system", 00:16:33.133 "dma_device_type": 1 00:16:33.133 }, 00:16:33.133 { 00:16:33.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.133 "dma_device_type": 2 00:16:33.133 } 00:16:33.133 ], 00:16:33.133 "driver_specific": {} 00:16:33.133 } 00:16:33.133 ] 00:16:33.133 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.133 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:33.133 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:33.133 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:33.133 11:26:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:33.133 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.133 11:26:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.133 BaseBdev3 00:16:33.133 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.133 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:33.133 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:33.133 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:33.133 11:26:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:33.133 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:33.133 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:33.133 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:33.133 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.133 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.133 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.133 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:33.133 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.133 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.133 [ 00:16:33.133 { 00:16:33.133 "name": "BaseBdev3", 00:16:33.133 "aliases": [ 00:16:33.133 "a0c8d547-968c-44ba-87ed-fd7ccb15bd40" 00:16:33.133 ], 00:16:33.133 "product_name": "Malloc disk", 00:16:33.133 "block_size": 512, 00:16:33.133 "num_blocks": 65536, 00:16:33.133 "uuid": "a0c8d547-968c-44ba-87ed-fd7ccb15bd40", 00:16:33.133 "assigned_rate_limits": { 00:16:33.133 "rw_ios_per_sec": 0, 00:16:33.133 "rw_mbytes_per_sec": 0, 00:16:33.133 "r_mbytes_per_sec": 0, 00:16:33.133 "w_mbytes_per_sec": 0 00:16:33.133 }, 00:16:33.133 "claimed": false, 00:16:33.133 "zoned": false, 00:16:33.133 "supported_io_types": { 00:16:33.133 "read": true, 00:16:33.133 "write": true, 00:16:33.133 "unmap": true, 00:16:33.133 "flush": true, 00:16:33.133 "reset": true, 00:16:33.133 "nvme_admin": false, 00:16:33.133 "nvme_io": false, 00:16:33.133 "nvme_io_md": 
false, 00:16:33.133 "write_zeroes": true, 00:16:33.133 "zcopy": true, 00:16:33.133 "get_zone_info": false, 00:16:33.133 "zone_management": false, 00:16:33.134 "zone_append": false, 00:16:33.134 "compare": false, 00:16:33.134 "compare_and_write": false, 00:16:33.134 "abort": true, 00:16:33.134 "seek_hole": false, 00:16:33.134 "seek_data": false, 00:16:33.134 "copy": true, 00:16:33.134 "nvme_iov_md": false 00:16:33.134 }, 00:16:33.134 "memory_domains": [ 00:16:33.134 { 00:16:33.134 "dma_device_id": "system", 00:16:33.134 "dma_device_type": 1 00:16:33.134 }, 00:16:33.134 { 00:16:33.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.134 "dma_device_type": 2 00:16:33.134 } 00:16:33.134 ], 00:16:33.134 "driver_specific": {} 00:16:33.134 } 00:16:33.134 ] 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.134 [2024-11-20 11:26:16.088888] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:33.134 [2024-11-20 11:26:16.089020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:33.134 [2024-11-20 11:26:16.089086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:16:33.134 [2024-11-20 11:26:16.091259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.134 11:26:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.134 "name": "Existed_Raid", 00:16:33.134 "uuid": "87daca5c-9353-48e0-85a2-d2dd77f4ddc8", 00:16:33.134 "strip_size_kb": 64, 00:16:33.134 "state": "configuring", 00:16:33.134 "raid_level": "raid5f", 00:16:33.134 "superblock": true, 00:16:33.134 "num_base_bdevs": 3, 00:16:33.134 "num_base_bdevs_discovered": 2, 00:16:33.134 "num_base_bdevs_operational": 3, 00:16:33.134 "base_bdevs_list": [ 00:16:33.134 { 00:16:33.134 "name": "BaseBdev1", 00:16:33.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.134 "is_configured": false, 00:16:33.134 "data_offset": 0, 00:16:33.134 "data_size": 0 00:16:33.134 }, 00:16:33.134 { 00:16:33.134 "name": "BaseBdev2", 00:16:33.134 "uuid": "7f035bdf-3dfd-4b2a-a72a-b6ce23291d47", 00:16:33.134 "is_configured": true, 00:16:33.134 "data_offset": 2048, 00:16:33.134 "data_size": 63488 00:16:33.134 }, 00:16:33.134 { 00:16:33.134 "name": "BaseBdev3", 00:16:33.134 "uuid": "a0c8d547-968c-44ba-87ed-fd7ccb15bd40", 00:16:33.134 "is_configured": true, 00:16:33.134 "data_offset": 2048, 00:16:33.134 "data_size": 63488 00:16:33.134 } 00:16:33.134 ] 00:16:33.134 }' 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.134 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.702 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:33.702 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.702 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.702 [2024-11-20 11:26:16.540115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:33.702 
11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.703 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:33.703 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.703 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.703 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.703 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.703 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.703 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.703 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.703 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.703 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.703 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.703 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.703 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.703 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.703 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.703 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:33.703 "name": "Existed_Raid", 00:16:33.703 "uuid": "87daca5c-9353-48e0-85a2-d2dd77f4ddc8", 00:16:33.703 "strip_size_kb": 64, 00:16:33.703 "state": "configuring", 00:16:33.703 "raid_level": "raid5f", 00:16:33.703 "superblock": true, 00:16:33.703 "num_base_bdevs": 3, 00:16:33.703 "num_base_bdevs_discovered": 1, 00:16:33.703 "num_base_bdevs_operational": 3, 00:16:33.703 "base_bdevs_list": [ 00:16:33.703 { 00:16:33.703 "name": "BaseBdev1", 00:16:33.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.703 "is_configured": false, 00:16:33.703 "data_offset": 0, 00:16:33.703 "data_size": 0 00:16:33.703 }, 00:16:33.703 { 00:16:33.703 "name": null, 00:16:33.703 "uuid": "7f035bdf-3dfd-4b2a-a72a-b6ce23291d47", 00:16:33.703 "is_configured": false, 00:16:33.703 "data_offset": 0, 00:16:33.703 "data_size": 63488 00:16:33.703 }, 00:16:33.703 { 00:16:33.703 "name": "BaseBdev3", 00:16:33.703 "uuid": "a0c8d547-968c-44ba-87ed-fd7ccb15bd40", 00:16:33.703 "is_configured": true, 00:16:33.703 "data_offset": 2048, 00:16:33.703 "data_size": 63488 00:16:33.703 } 00:16:33.703 ] 00:16:33.703 }' 00:16:33.703 11:26:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.703 11:26:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.961 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:33.961 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.961 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.961 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.961 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.961 11:26:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:33.961 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:33.961 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.961 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.221 [2024-11-20 11:26:17.103095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:34.221 BaseBdev1 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:34.221 
11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.221 [ 00:16:34.221 { 00:16:34.221 "name": "BaseBdev1", 00:16:34.221 "aliases": [ 00:16:34.221 "2eb4a653-8703-489b-9ec8-b9cab3119c9f" 00:16:34.221 ], 00:16:34.221 "product_name": "Malloc disk", 00:16:34.221 "block_size": 512, 00:16:34.221 "num_blocks": 65536, 00:16:34.221 "uuid": "2eb4a653-8703-489b-9ec8-b9cab3119c9f", 00:16:34.221 "assigned_rate_limits": { 00:16:34.221 "rw_ios_per_sec": 0, 00:16:34.221 "rw_mbytes_per_sec": 0, 00:16:34.221 "r_mbytes_per_sec": 0, 00:16:34.221 "w_mbytes_per_sec": 0 00:16:34.221 }, 00:16:34.221 "claimed": true, 00:16:34.221 "claim_type": "exclusive_write", 00:16:34.221 "zoned": false, 00:16:34.221 "supported_io_types": { 00:16:34.221 "read": true, 00:16:34.221 "write": true, 00:16:34.221 "unmap": true, 00:16:34.221 "flush": true, 00:16:34.221 "reset": true, 00:16:34.221 "nvme_admin": false, 00:16:34.221 "nvme_io": false, 00:16:34.221 "nvme_io_md": false, 00:16:34.221 "write_zeroes": true, 00:16:34.221 "zcopy": true, 00:16:34.221 "get_zone_info": false, 00:16:34.221 "zone_management": false, 00:16:34.221 "zone_append": false, 00:16:34.221 "compare": false, 00:16:34.221 "compare_and_write": false, 00:16:34.221 "abort": true, 00:16:34.221 "seek_hole": false, 00:16:34.221 "seek_data": false, 00:16:34.221 "copy": true, 00:16:34.221 "nvme_iov_md": false 00:16:34.221 }, 00:16:34.221 "memory_domains": [ 00:16:34.221 { 00:16:34.221 "dma_device_id": "system", 00:16:34.221 "dma_device_type": 1 00:16:34.221 }, 00:16:34.221 { 00:16:34.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.221 "dma_device_type": 2 00:16:34.221 } 00:16:34.221 ], 00:16:34.221 "driver_specific": {} 00:16:34.221 } 00:16:34.221 ] 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.221 
11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:34.221 "name": "Existed_Raid", 00:16:34.221 "uuid": "87daca5c-9353-48e0-85a2-d2dd77f4ddc8", 00:16:34.221 "strip_size_kb": 64, 00:16:34.221 "state": "configuring", 00:16:34.221 "raid_level": "raid5f", 00:16:34.221 "superblock": true, 00:16:34.221 "num_base_bdevs": 3, 00:16:34.221 "num_base_bdevs_discovered": 2, 00:16:34.221 "num_base_bdevs_operational": 3, 00:16:34.221 "base_bdevs_list": [ 00:16:34.221 { 00:16:34.221 "name": "BaseBdev1", 00:16:34.221 "uuid": "2eb4a653-8703-489b-9ec8-b9cab3119c9f", 00:16:34.221 "is_configured": true, 00:16:34.221 "data_offset": 2048, 00:16:34.221 "data_size": 63488 00:16:34.221 }, 00:16:34.221 { 00:16:34.221 "name": null, 00:16:34.221 "uuid": "7f035bdf-3dfd-4b2a-a72a-b6ce23291d47", 00:16:34.221 "is_configured": false, 00:16:34.221 "data_offset": 0, 00:16:34.221 "data_size": 63488 00:16:34.221 }, 00:16:34.221 { 00:16:34.221 "name": "BaseBdev3", 00:16:34.221 "uuid": "a0c8d547-968c-44ba-87ed-fd7ccb15bd40", 00:16:34.221 "is_configured": true, 00:16:34.221 "data_offset": 2048, 00:16:34.221 "data_size": 63488 00:16:34.221 } 00:16:34.221 ] 00:16:34.221 }' 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.221 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.479 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:34.479 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.479 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.479 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.479 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.737 11:26:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:34.737 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:34.737 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.737 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.737 [2024-11-20 11:26:17.622352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:34.737 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.737 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:34.737 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.737 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.737 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.737 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.737 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.737 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.737 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.737 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.737 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.737 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.737 11:26:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.737 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.737 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.737 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.737 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.737 "name": "Existed_Raid", 00:16:34.737 "uuid": "87daca5c-9353-48e0-85a2-d2dd77f4ddc8", 00:16:34.737 "strip_size_kb": 64, 00:16:34.737 "state": "configuring", 00:16:34.737 "raid_level": "raid5f", 00:16:34.737 "superblock": true, 00:16:34.737 "num_base_bdevs": 3, 00:16:34.737 "num_base_bdevs_discovered": 1, 00:16:34.737 "num_base_bdevs_operational": 3, 00:16:34.737 "base_bdevs_list": [ 00:16:34.737 { 00:16:34.737 "name": "BaseBdev1", 00:16:34.737 "uuid": "2eb4a653-8703-489b-9ec8-b9cab3119c9f", 00:16:34.738 "is_configured": true, 00:16:34.738 "data_offset": 2048, 00:16:34.738 "data_size": 63488 00:16:34.738 }, 00:16:34.738 { 00:16:34.738 "name": null, 00:16:34.738 "uuid": "7f035bdf-3dfd-4b2a-a72a-b6ce23291d47", 00:16:34.738 "is_configured": false, 00:16:34.738 "data_offset": 0, 00:16:34.738 "data_size": 63488 00:16:34.738 }, 00:16:34.738 { 00:16:34.738 "name": null, 00:16:34.738 "uuid": "a0c8d547-968c-44ba-87ed-fd7ccb15bd40", 00:16:34.738 "is_configured": false, 00:16:34.738 "data_offset": 0, 00:16:34.738 "data_size": 63488 00:16:34.738 } 00:16:34.738 ] 00:16:34.738 }' 00:16:34.738 11:26:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.738 11:26:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.032 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.032 11:26:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.032 11:26:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.032 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:35.032 11:26:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.293 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:35.293 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:35.293 11:26:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.293 11:26:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.293 [2024-11-20 11:26:18.141547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:35.293 11:26:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.293 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:35.293 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.293 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.293 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.294 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.294 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.294 11:26:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.294 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.294 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.294 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.294 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.294 11:26:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.294 11:26:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.294 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.294 11:26:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.294 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.294 "name": "Existed_Raid", 00:16:35.294 "uuid": "87daca5c-9353-48e0-85a2-d2dd77f4ddc8", 00:16:35.294 "strip_size_kb": 64, 00:16:35.294 "state": "configuring", 00:16:35.294 "raid_level": "raid5f", 00:16:35.294 "superblock": true, 00:16:35.294 "num_base_bdevs": 3, 00:16:35.294 "num_base_bdevs_discovered": 2, 00:16:35.294 "num_base_bdevs_operational": 3, 00:16:35.294 "base_bdevs_list": [ 00:16:35.294 { 00:16:35.294 "name": "BaseBdev1", 00:16:35.294 "uuid": "2eb4a653-8703-489b-9ec8-b9cab3119c9f", 00:16:35.294 "is_configured": true, 00:16:35.294 "data_offset": 2048, 00:16:35.294 "data_size": 63488 00:16:35.294 }, 00:16:35.294 { 00:16:35.294 "name": null, 00:16:35.294 "uuid": "7f035bdf-3dfd-4b2a-a72a-b6ce23291d47", 00:16:35.294 "is_configured": false, 00:16:35.294 "data_offset": 0, 00:16:35.294 "data_size": 63488 00:16:35.294 }, 00:16:35.294 { 00:16:35.294 "name": "BaseBdev3", 00:16:35.294 
"uuid": "a0c8d547-968c-44ba-87ed-fd7ccb15bd40", 00:16:35.294 "is_configured": true, 00:16:35.294 "data_offset": 2048, 00:16:35.294 "data_size": 63488 00:16:35.294 } 00:16:35.294 ] 00:16:35.294 }' 00:16:35.294 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.294 11:26:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.552 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:35.552 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.552 11:26:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.552 11:26:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.552 11:26:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.552 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:35.552 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:35.552 11:26:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.552 11:26:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.552 [2024-11-20 11:26:18.632742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:35.810 11:26:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.810 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:35.810 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.810 11:26:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.810 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.810 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.810 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.810 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.810 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.810 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.810 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.810 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.810 11:26:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.810 11:26:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.810 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.810 11:26:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.810 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.810 "name": "Existed_Raid", 00:16:35.810 "uuid": "87daca5c-9353-48e0-85a2-d2dd77f4ddc8", 00:16:35.810 "strip_size_kb": 64, 00:16:35.810 "state": "configuring", 00:16:35.810 "raid_level": "raid5f", 00:16:35.810 "superblock": true, 00:16:35.810 "num_base_bdevs": 3, 00:16:35.810 "num_base_bdevs_discovered": 1, 00:16:35.810 "num_base_bdevs_operational": 3, 00:16:35.810 
"base_bdevs_list": [ 00:16:35.810 { 00:16:35.810 "name": null, 00:16:35.810 "uuid": "2eb4a653-8703-489b-9ec8-b9cab3119c9f", 00:16:35.810 "is_configured": false, 00:16:35.810 "data_offset": 0, 00:16:35.810 "data_size": 63488 00:16:35.810 }, 00:16:35.810 { 00:16:35.810 "name": null, 00:16:35.810 "uuid": "7f035bdf-3dfd-4b2a-a72a-b6ce23291d47", 00:16:35.810 "is_configured": false, 00:16:35.810 "data_offset": 0, 00:16:35.810 "data_size": 63488 00:16:35.810 }, 00:16:35.810 { 00:16:35.810 "name": "BaseBdev3", 00:16:35.810 "uuid": "a0c8d547-968c-44ba-87ed-fd7ccb15bd40", 00:16:35.810 "is_configured": true, 00:16:35.810 "data_offset": 2048, 00:16:35.810 "data_size": 63488 00:16:35.810 } 00:16:35.810 ] 00:16:35.810 }' 00:16:35.810 11:26:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.810 11:26:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.068 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:36.068 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.068 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.068 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:16:36.327 [2024-11-20 11:26:19.212092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.327 11:26:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.327 "name": "Existed_Raid", 00:16:36.327 "uuid": "87daca5c-9353-48e0-85a2-d2dd77f4ddc8", 00:16:36.327 "strip_size_kb": 64, 00:16:36.327 "state": "configuring", 00:16:36.327 "raid_level": "raid5f", 00:16:36.327 "superblock": true, 00:16:36.327 "num_base_bdevs": 3, 00:16:36.327 "num_base_bdevs_discovered": 2, 00:16:36.327 "num_base_bdevs_operational": 3, 00:16:36.327 "base_bdevs_list": [ 00:16:36.327 { 00:16:36.327 "name": null, 00:16:36.327 "uuid": "2eb4a653-8703-489b-9ec8-b9cab3119c9f", 00:16:36.327 "is_configured": false, 00:16:36.327 "data_offset": 0, 00:16:36.327 "data_size": 63488 00:16:36.327 }, 00:16:36.327 { 00:16:36.327 "name": "BaseBdev2", 00:16:36.327 "uuid": "7f035bdf-3dfd-4b2a-a72a-b6ce23291d47", 00:16:36.327 "is_configured": true, 00:16:36.327 "data_offset": 2048, 00:16:36.327 "data_size": 63488 00:16:36.327 }, 00:16:36.327 { 00:16:36.327 "name": "BaseBdev3", 00:16:36.327 "uuid": "a0c8d547-968c-44ba-87ed-fd7ccb15bd40", 00:16:36.327 "is_configured": true, 00:16:36.327 "data_offset": 2048, 00:16:36.327 "data_size": 63488 00:16:36.327 } 00:16:36.327 ] 00:16:36.327 }' 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.327 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.585 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.585 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:36.585 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.585 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:36.843 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.843 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:36.843 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.843 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:36.843 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.843 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.843 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.843 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2eb4a653-8703-489b-9ec8-b9cab3119c9f 00:16:36.843 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.843 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.843 [2024-11-20 11:26:19.837978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:36.843 [2024-11-20 11:26:19.838234] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:36.843 [2024-11-20 11:26:19.838256] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:36.843 [2024-11-20 11:26:19.838557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:36.844 NewBaseBdev 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:36.844 11:26:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.844 [2024-11-20 11:26:19.845093] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:36.844 [2024-11-20 11:26:19.845118] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:36.844 [2024-11-20 11:26:19.845329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.844 [ 00:16:36.844 { 00:16:36.844 "name": "NewBaseBdev", 00:16:36.844 "aliases": [ 00:16:36.844 "2eb4a653-8703-489b-9ec8-b9cab3119c9f" 00:16:36.844 ], 00:16:36.844 "product_name": "Malloc 
disk", 00:16:36.844 "block_size": 512, 00:16:36.844 "num_blocks": 65536, 00:16:36.844 "uuid": "2eb4a653-8703-489b-9ec8-b9cab3119c9f", 00:16:36.844 "assigned_rate_limits": { 00:16:36.844 "rw_ios_per_sec": 0, 00:16:36.844 "rw_mbytes_per_sec": 0, 00:16:36.844 "r_mbytes_per_sec": 0, 00:16:36.844 "w_mbytes_per_sec": 0 00:16:36.844 }, 00:16:36.844 "claimed": true, 00:16:36.844 "claim_type": "exclusive_write", 00:16:36.844 "zoned": false, 00:16:36.844 "supported_io_types": { 00:16:36.844 "read": true, 00:16:36.844 "write": true, 00:16:36.844 "unmap": true, 00:16:36.844 "flush": true, 00:16:36.844 "reset": true, 00:16:36.844 "nvme_admin": false, 00:16:36.844 "nvme_io": false, 00:16:36.844 "nvme_io_md": false, 00:16:36.844 "write_zeroes": true, 00:16:36.844 "zcopy": true, 00:16:36.844 "get_zone_info": false, 00:16:36.844 "zone_management": false, 00:16:36.844 "zone_append": false, 00:16:36.844 "compare": false, 00:16:36.844 "compare_and_write": false, 00:16:36.844 "abort": true, 00:16:36.844 "seek_hole": false, 00:16:36.844 "seek_data": false, 00:16:36.844 "copy": true, 00:16:36.844 "nvme_iov_md": false 00:16:36.844 }, 00:16:36.844 "memory_domains": [ 00:16:36.844 { 00:16:36.844 "dma_device_id": "system", 00:16:36.844 "dma_device_type": 1 00:16:36.844 }, 00:16:36.844 { 00:16:36.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.844 "dma_device_type": 2 00:16:36.844 } 00:16:36.844 ], 00:16:36.844 "driver_specific": {} 00:16:36.844 } 00:16:36.844 ] 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.844 11:26:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.844 "name": "Existed_Raid", 00:16:36.844 "uuid": "87daca5c-9353-48e0-85a2-d2dd77f4ddc8", 00:16:36.844 "strip_size_kb": 64, 00:16:36.844 "state": "online", 00:16:36.844 "raid_level": "raid5f", 00:16:36.844 "superblock": true, 00:16:36.844 "num_base_bdevs": 3, 00:16:36.844 "num_base_bdevs_discovered": 3, 00:16:36.844 "num_base_bdevs_operational": 3, 00:16:36.844 
"base_bdevs_list": [ 00:16:36.844 { 00:16:36.844 "name": "NewBaseBdev", 00:16:36.844 "uuid": "2eb4a653-8703-489b-9ec8-b9cab3119c9f", 00:16:36.844 "is_configured": true, 00:16:36.844 "data_offset": 2048, 00:16:36.844 "data_size": 63488 00:16:36.844 }, 00:16:36.844 { 00:16:36.844 "name": "BaseBdev2", 00:16:36.844 "uuid": "7f035bdf-3dfd-4b2a-a72a-b6ce23291d47", 00:16:36.844 "is_configured": true, 00:16:36.844 "data_offset": 2048, 00:16:36.844 "data_size": 63488 00:16:36.844 }, 00:16:36.844 { 00:16:36.844 "name": "BaseBdev3", 00:16:36.844 "uuid": "a0c8d547-968c-44ba-87ed-fd7ccb15bd40", 00:16:36.844 "is_configured": true, 00:16:36.844 "data_offset": 2048, 00:16:36.844 "data_size": 63488 00:16:36.844 } 00:16:36.844 ] 00:16:36.844 }' 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.844 11:26:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.411 [2024-11-20 11:26:20.316319] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:37.411 "name": "Existed_Raid", 00:16:37.411 "aliases": [ 00:16:37.411 "87daca5c-9353-48e0-85a2-d2dd77f4ddc8" 00:16:37.411 ], 00:16:37.411 "product_name": "Raid Volume", 00:16:37.411 "block_size": 512, 00:16:37.411 "num_blocks": 126976, 00:16:37.411 "uuid": "87daca5c-9353-48e0-85a2-d2dd77f4ddc8", 00:16:37.411 "assigned_rate_limits": { 00:16:37.411 "rw_ios_per_sec": 0, 00:16:37.411 "rw_mbytes_per_sec": 0, 00:16:37.411 "r_mbytes_per_sec": 0, 00:16:37.411 "w_mbytes_per_sec": 0 00:16:37.411 }, 00:16:37.411 "claimed": false, 00:16:37.411 "zoned": false, 00:16:37.411 "supported_io_types": { 00:16:37.411 "read": true, 00:16:37.411 "write": true, 00:16:37.411 "unmap": false, 00:16:37.411 "flush": false, 00:16:37.411 "reset": true, 00:16:37.411 "nvme_admin": false, 00:16:37.411 "nvme_io": false, 00:16:37.411 "nvme_io_md": false, 00:16:37.411 "write_zeroes": true, 00:16:37.411 "zcopy": false, 00:16:37.411 "get_zone_info": false, 00:16:37.411 "zone_management": false, 00:16:37.411 "zone_append": false, 00:16:37.411 "compare": false, 00:16:37.411 "compare_and_write": false, 00:16:37.411 "abort": false, 00:16:37.411 "seek_hole": false, 00:16:37.411 "seek_data": false, 00:16:37.411 "copy": false, 00:16:37.411 "nvme_iov_md": false 00:16:37.411 }, 00:16:37.411 "driver_specific": { 00:16:37.411 "raid": { 00:16:37.411 "uuid": "87daca5c-9353-48e0-85a2-d2dd77f4ddc8", 00:16:37.411 "strip_size_kb": 64, 00:16:37.411 "state": "online", 00:16:37.411 "raid_level": "raid5f", 00:16:37.411 "superblock": true, 00:16:37.411 
"num_base_bdevs": 3, 00:16:37.411 "num_base_bdevs_discovered": 3, 00:16:37.411 "num_base_bdevs_operational": 3, 00:16:37.411 "base_bdevs_list": [ 00:16:37.411 { 00:16:37.411 "name": "NewBaseBdev", 00:16:37.411 "uuid": "2eb4a653-8703-489b-9ec8-b9cab3119c9f", 00:16:37.411 "is_configured": true, 00:16:37.411 "data_offset": 2048, 00:16:37.411 "data_size": 63488 00:16:37.411 }, 00:16:37.411 { 00:16:37.411 "name": "BaseBdev2", 00:16:37.411 "uuid": "7f035bdf-3dfd-4b2a-a72a-b6ce23291d47", 00:16:37.411 "is_configured": true, 00:16:37.411 "data_offset": 2048, 00:16:37.411 "data_size": 63488 00:16:37.411 }, 00:16:37.411 { 00:16:37.411 "name": "BaseBdev3", 00:16:37.411 "uuid": "a0c8d547-968c-44ba-87ed-fd7ccb15bd40", 00:16:37.411 "is_configured": true, 00:16:37.411 "data_offset": 2048, 00:16:37.411 "data_size": 63488 00:16:37.411 } 00:16:37.411 ] 00:16:37.411 } 00:16:37.411 } 00:16:37.411 }' 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:37.411 BaseBdev2 00:16:37.411 BaseBdev3' 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.411 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.670 [2024-11-20 11:26:20.619636] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:37.670 [2024-11-20 11:26:20.619674] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:37.670 [2024-11-20 11:26:20.619775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.670 [2024-11-20 11:26:20.620105] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.670 [2024-11-20 11:26:20.620129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80685 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80685 ']' 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80685 00:16:37.670 11:26:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80685 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:37.670 killing process with pid 80685 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80685' 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80685 00:16:37.670 [2024-11-20 11:26:20.668088] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:37.670 11:26:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80685 00:16:37.928 [2024-11-20 11:26:21.007890] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:39.311 11:26:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:39.311 00:16:39.311 real 0m11.282s 00:16:39.311 user 0m17.813s 00:16:39.311 sys 0m2.043s 00:16:39.311 11:26:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.311 11:26:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.311 ************************************ 00:16:39.311 END TEST raid5f_state_function_test_sb 00:16:39.311 ************************************ 00:16:39.311 11:26:22 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:16:39.311 11:26:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:39.311 
11:26:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.311 11:26:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:39.311 ************************************ 00:16:39.311 START TEST raid5f_superblock_test 00:16:39.311 ************************************ 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81311 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81311 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81311 ']' 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:39.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:39.311 11:26:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.570 [2024-11-20 11:26:22.428758] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:16:39.570 [2024-11-20 11:26:22.429386] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81311 ] 00:16:39.570 [2024-11-20 11:26:22.587182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.829 [2024-11-20 11:26:22.718775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.829 [2024-11-20 11:26:22.941284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:39.829 [2024-11-20 11:26:22.941322] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.394 malloc1 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.394 [2024-11-20 11:26:23.362328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:40.394 [2024-11-20 11:26:23.362407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.394 [2024-11-20 11:26:23.362432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:40.394 [2024-11-20 11:26:23.362442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.394 [2024-11-20 11:26:23.364666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.394 [2024-11-20 11:26:23.364706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:40.394 pt1 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.394 malloc2 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.394 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.394 [2024-11-20 11:26:23.427568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:40.394 [2024-11-20 11:26:23.427645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.394 [2024-11-20 11:26:23.427675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:40.394 [2024-11-20 11:26:23.427686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.394 [2024-11-20 11:26:23.430049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.394 [2024-11-20 11:26:23.430095] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:40.394 pt2 00:16:40.395 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.395 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:40.395 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:40.395 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:40.395 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:40.395 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:40.395 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:40.395 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:40.395 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:40.395 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:40.395 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.395 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.395 malloc3 00:16:40.395 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.395 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:40.395 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.395 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.395 [2024-11-20 11:26:23.506499] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:40.395 [2024-11-20 11:26:23.506589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.395 [2024-11-20 11:26:23.506618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:40.395 [2024-11-20 11:26:23.506628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.652 [2024-11-20 11:26:23.509142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.652 [2024-11-20 11:26:23.509198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:40.652 pt3 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.652 [2024-11-20 11:26:23.518566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:40.652 [2024-11-20 11:26:23.520725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:40.652 [2024-11-20 11:26:23.520808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:40.652 [2024-11-20 11:26:23.521021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:40.652 [2024-11-20 11:26:23.521051] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:16:40.652 [2024-11-20 11:26:23.521375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:40.652 [2024-11-20 11:26:23.528805] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:40.652 [2024-11-20 11:26:23.528837] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:40.652 [2024-11-20 11:26:23.529110] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.652 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.652 "name": "raid_bdev1", 00:16:40.652 "uuid": "8a23eddf-8cbe-4afb-9d01-0bfb94ddf942", 00:16:40.652 "strip_size_kb": 64, 00:16:40.652 "state": "online", 00:16:40.652 "raid_level": "raid5f", 00:16:40.653 "superblock": true, 00:16:40.653 "num_base_bdevs": 3, 00:16:40.653 "num_base_bdevs_discovered": 3, 00:16:40.653 "num_base_bdevs_operational": 3, 00:16:40.653 "base_bdevs_list": [ 00:16:40.653 { 00:16:40.653 "name": "pt1", 00:16:40.653 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:40.653 "is_configured": true, 00:16:40.653 "data_offset": 2048, 00:16:40.653 "data_size": 63488 00:16:40.653 }, 00:16:40.653 { 00:16:40.653 "name": "pt2", 00:16:40.653 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.653 "is_configured": true, 00:16:40.653 "data_offset": 2048, 00:16:40.653 "data_size": 63488 00:16:40.653 }, 00:16:40.653 { 00:16:40.653 "name": "pt3", 00:16:40.653 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:40.653 "is_configured": true, 00:16:40.653 "data_offset": 2048, 00:16:40.653 "data_size": 63488 00:16:40.653 } 00:16:40.653 ] 00:16:40.653 }' 00:16:40.653 11:26:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.653 11:26:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.282 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:41.282 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:41.282 11:26:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:41.282 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:41.282 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:41.282 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:41.282 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:41.282 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.282 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.282 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:41.282 [2024-11-20 11:26:24.036507] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:41.282 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.282 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:41.282 "name": "raid_bdev1", 00:16:41.282 "aliases": [ 00:16:41.282 "8a23eddf-8cbe-4afb-9d01-0bfb94ddf942" 00:16:41.282 ], 00:16:41.282 "product_name": "Raid Volume", 00:16:41.282 "block_size": 512, 00:16:41.282 "num_blocks": 126976, 00:16:41.282 "uuid": "8a23eddf-8cbe-4afb-9d01-0bfb94ddf942", 00:16:41.282 "assigned_rate_limits": { 00:16:41.282 "rw_ios_per_sec": 0, 00:16:41.282 "rw_mbytes_per_sec": 0, 00:16:41.282 "r_mbytes_per_sec": 0, 00:16:41.282 "w_mbytes_per_sec": 0 00:16:41.282 }, 00:16:41.282 "claimed": false, 00:16:41.282 "zoned": false, 00:16:41.282 "supported_io_types": { 00:16:41.282 "read": true, 00:16:41.282 "write": true, 00:16:41.282 "unmap": false, 00:16:41.282 "flush": false, 00:16:41.282 "reset": true, 00:16:41.282 "nvme_admin": false, 00:16:41.282 "nvme_io": false, 00:16:41.282 "nvme_io_md": false, 
00:16:41.282 "write_zeroes": true, 00:16:41.282 "zcopy": false, 00:16:41.282 "get_zone_info": false, 00:16:41.282 "zone_management": false, 00:16:41.282 "zone_append": false, 00:16:41.282 "compare": false, 00:16:41.282 "compare_and_write": false, 00:16:41.282 "abort": false, 00:16:41.282 "seek_hole": false, 00:16:41.282 "seek_data": false, 00:16:41.282 "copy": false, 00:16:41.282 "nvme_iov_md": false 00:16:41.282 }, 00:16:41.282 "driver_specific": { 00:16:41.282 "raid": { 00:16:41.282 "uuid": "8a23eddf-8cbe-4afb-9d01-0bfb94ddf942", 00:16:41.282 "strip_size_kb": 64, 00:16:41.282 "state": "online", 00:16:41.282 "raid_level": "raid5f", 00:16:41.282 "superblock": true, 00:16:41.282 "num_base_bdevs": 3, 00:16:41.282 "num_base_bdevs_discovered": 3, 00:16:41.282 "num_base_bdevs_operational": 3, 00:16:41.282 "base_bdevs_list": [ 00:16:41.282 { 00:16:41.282 "name": "pt1", 00:16:41.282 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:41.282 "is_configured": true, 00:16:41.282 "data_offset": 2048, 00:16:41.282 "data_size": 63488 00:16:41.282 }, 00:16:41.282 { 00:16:41.282 "name": "pt2", 00:16:41.282 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.282 "is_configured": true, 00:16:41.282 "data_offset": 2048, 00:16:41.282 "data_size": 63488 00:16:41.282 }, 00:16:41.282 { 00:16:41.282 "name": "pt3", 00:16:41.282 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:41.282 "is_configured": true, 00:16:41.282 "data_offset": 2048, 00:16:41.282 "data_size": 63488 00:16:41.282 } 00:16:41.282 ] 00:16:41.282 } 00:16:41.282 } 00:16:41.282 }' 00:16:41.282 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:41.282 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:41.282 pt2 00:16:41.282 pt3' 00:16:41.282 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:41.283 
11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:41.283 [2024-11-20 11:26:24.347947] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8a23eddf-8cbe-4afb-9d01-0bfb94ddf942 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8a23eddf-8cbe-4afb-9d01-0bfb94ddf942 ']' 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:41.283 11:26:24 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.283 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.283 [2024-11-20 11:26:24.395646] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:41.283 [2024-11-20 11:26:24.395688] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.283 [2024-11-20 11:26:24.395796] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.283 [2024-11-20 11:26:24.395900] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.283 [2024-11-20 11:26:24.395921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:41.540 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.541 [2024-11-20 11:26:24.551595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:41.541 [2024-11-20 11:26:24.553740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:41.541 [2024-11-20 11:26:24.553852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:41.541 [2024-11-20 11:26:24.553935] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:41.541 [2024-11-20 11:26:24.554039] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:41.541 [2024-11-20 11:26:24.554114] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:41.541 [2024-11-20 11:26:24.554176] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:41.541 [2024-11-20 11:26:24.554213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:41.541 request: 00:16:41.541 { 00:16:41.541 "name": "raid_bdev1", 00:16:41.541 "raid_level": "raid5f", 00:16:41.541 "base_bdevs": [ 00:16:41.541 "malloc1", 00:16:41.541 "malloc2", 00:16:41.541 "malloc3" 00:16:41.541 ], 00:16:41.541 "strip_size_kb": 64, 00:16:41.541 "superblock": false, 00:16:41.541 "method": "bdev_raid_create", 00:16:41.541 "req_id": 1 00:16:41.541 } 00:16:41.541 Got JSON-RPC error response 00:16:41.541 response: 00:16:41.541 { 00:16:41.541 "code": -17, 00:16:41.541 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:41.541 } 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.541 
11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.541 [2024-11-20 11:26:24.615412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:41.541 [2024-11-20 11:26:24.615565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.541 [2024-11-20 11:26:24.615610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:41.541 [2024-11-20 11:26:24.615673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.541 [2024-11-20 11:26:24.618089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.541 [2024-11-20 11:26:24.618168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:41.541 [2024-11-20 11:26:24.618271] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:41.541 [2024-11-20 11:26:24.618333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:41.541 pt1 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.541 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.798 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.798 "name": "raid_bdev1", 00:16:41.798 "uuid": "8a23eddf-8cbe-4afb-9d01-0bfb94ddf942", 00:16:41.798 "strip_size_kb": 64, 00:16:41.799 "state": "configuring", 00:16:41.799 "raid_level": "raid5f", 00:16:41.799 "superblock": true, 00:16:41.799 "num_base_bdevs": 3, 00:16:41.799 "num_base_bdevs_discovered": 1, 00:16:41.799 
"num_base_bdevs_operational": 3, 00:16:41.799 "base_bdevs_list": [ 00:16:41.799 { 00:16:41.799 "name": "pt1", 00:16:41.799 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:41.799 "is_configured": true, 00:16:41.799 "data_offset": 2048, 00:16:41.799 "data_size": 63488 00:16:41.799 }, 00:16:41.799 { 00:16:41.799 "name": null, 00:16:41.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.799 "is_configured": false, 00:16:41.799 "data_offset": 2048, 00:16:41.799 "data_size": 63488 00:16:41.799 }, 00:16:41.799 { 00:16:41.799 "name": null, 00:16:41.799 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:41.799 "is_configured": false, 00:16:41.799 "data_offset": 2048, 00:16:41.799 "data_size": 63488 00:16:41.799 } 00:16:41.799 ] 00:16:41.799 }' 00:16:41.799 11:26:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.799 11:26:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.056 [2024-11-20 11:26:25.102585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:42.056 [2024-11-20 11:26:25.102736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.056 [2024-11-20 11:26:25.102781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:42.056 [2024-11-20 11:26:25.102814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.056 [2024-11-20 11:26:25.103356] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.056 [2024-11-20 11:26:25.103427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:42.056 [2024-11-20 11:26:25.103593] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:42.056 [2024-11-20 11:26:25.103658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:42.056 pt2 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.056 [2024-11-20 11:26:25.114596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.056 11:26:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.313 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.313 "name": "raid_bdev1", 00:16:42.313 "uuid": "8a23eddf-8cbe-4afb-9d01-0bfb94ddf942", 00:16:42.313 "strip_size_kb": 64, 00:16:42.313 "state": "configuring", 00:16:42.313 "raid_level": "raid5f", 00:16:42.313 "superblock": true, 00:16:42.313 "num_base_bdevs": 3, 00:16:42.313 "num_base_bdevs_discovered": 1, 00:16:42.313 "num_base_bdevs_operational": 3, 00:16:42.313 "base_bdevs_list": [ 00:16:42.313 { 00:16:42.313 "name": "pt1", 00:16:42.313 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:42.313 "is_configured": true, 00:16:42.313 "data_offset": 2048, 00:16:42.313 "data_size": 63488 00:16:42.313 }, 00:16:42.313 { 00:16:42.313 "name": null, 00:16:42.313 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.313 "is_configured": false, 00:16:42.313 "data_offset": 0, 00:16:42.313 "data_size": 63488 00:16:42.313 }, 00:16:42.313 { 00:16:42.313 "name": null, 00:16:42.313 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.313 "is_configured": false, 00:16:42.313 "data_offset": 2048, 00:16:42.313 "data_size": 63488 00:16:42.313 } 00:16:42.313 ] 00:16:42.313 }' 00:16:42.313 11:26:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.313 11:26:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.571 [2024-11-20 11:26:25.617686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:42.571 [2024-11-20 11:26:25.617774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.571 [2024-11-20 11:26:25.617796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:42.571 [2024-11-20 11:26:25.617809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.571 [2024-11-20 11:26:25.618336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.571 [2024-11-20 11:26:25.618366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:42.571 [2024-11-20 11:26:25.618479] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:42.571 [2024-11-20 11:26:25.618516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:42.571 pt2 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:42.571 11:26:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.571 [2024-11-20 11:26:25.629663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:42.571 [2024-11-20 11:26:25.629749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.571 [2024-11-20 11:26:25.629768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:42.571 [2024-11-20 11:26:25.629780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.571 [2024-11-20 11:26:25.630268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.571 [2024-11-20 11:26:25.630305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:42.571 [2024-11-20 11:26:25.630395] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:42.571 [2024-11-20 11:26:25.630422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:42.571 [2024-11-20 11:26:25.630602] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:42.571 [2024-11-20 11:26:25.630621] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:42.571 [2024-11-20 11:26:25.630906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:42.571 [2024-11-20 11:26:25.637305] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:42.571 [2024-11-20 11:26:25.637330] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:42.571 [2024-11-20 11:26:25.637586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.571 pt3 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.571 11:26:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.829 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.829 "name": "raid_bdev1", 00:16:42.829 "uuid": "8a23eddf-8cbe-4afb-9d01-0bfb94ddf942", 00:16:42.829 "strip_size_kb": 64, 00:16:42.829 "state": "online", 00:16:42.829 "raid_level": "raid5f", 00:16:42.829 "superblock": true, 00:16:42.829 "num_base_bdevs": 3, 00:16:42.829 "num_base_bdevs_discovered": 3, 00:16:42.829 "num_base_bdevs_operational": 3, 00:16:42.829 "base_bdevs_list": [ 00:16:42.829 { 00:16:42.829 "name": "pt1", 00:16:42.829 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:42.829 "is_configured": true, 00:16:42.829 "data_offset": 2048, 00:16:42.829 "data_size": 63488 00:16:42.829 }, 00:16:42.829 { 00:16:42.829 "name": "pt2", 00:16:42.829 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.829 "is_configured": true, 00:16:42.829 "data_offset": 2048, 00:16:42.829 "data_size": 63488 00:16:42.829 }, 00:16:42.829 { 00:16:42.829 "name": "pt3", 00:16:42.829 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.829 "is_configured": true, 00:16:42.829 "data_offset": 2048, 00:16:42.829 "data_size": 63488 00:16:42.829 } 00:16:42.829 ] 00:16:42.829 }' 00:16:42.829 11:26:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.829 11:26:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.088 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:43.088 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:43.088 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:43.088 
11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:43.088 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:43.088 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:43.088 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:43.088 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:43.088 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.088 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.088 [2024-11-20 11:26:26.108649] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:43.088 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.088 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:43.089 "name": "raid_bdev1", 00:16:43.089 "aliases": [ 00:16:43.089 "8a23eddf-8cbe-4afb-9d01-0bfb94ddf942" 00:16:43.089 ], 00:16:43.089 "product_name": "Raid Volume", 00:16:43.089 "block_size": 512, 00:16:43.089 "num_blocks": 126976, 00:16:43.089 "uuid": "8a23eddf-8cbe-4afb-9d01-0bfb94ddf942", 00:16:43.089 "assigned_rate_limits": { 00:16:43.089 "rw_ios_per_sec": 0, 00:16:43.089 "rw_mbytes_per_sec": 0, 00:16:43.089 "r_mbytes_per_sec": 0, 00:16:43.089 "w_mbytes_per_sec": 0 00:16:43.089 }, 00:16:43.089 "claimed": false, 00:16:43.089 "zoned": false, 00:16:43.089 "supported_io_types": { 00:16:43.089 "read": true, 00:16:43.089 "write": true, 00:16:43.089 "unmap": false, 00:16:43.089 "flush": false, 00:16:43.089 "reset": true, 00:16:43.089 "nvme_admin": false, 00:16:43.089 "nvme_io": false, 00:16:43.089 "nvme_io_md": false, 00:16:43.089 "write_zeroes": true, 00:16:43.089 "zcopy": false, 00:16:43.089 "get_zone_info": false, 
00:16:43.089 "zone_management": false, 00:16:43.089 "zone_append": false, 00:16:43.089 "compare": false, 00:16:43.089 "compare_and_write": false, 00:16:43.089 "abort": false, 00:16:43.089 "seek_hole": false, 00:16:43.089 "seek_data": false, 00:16:43.089 "copy": false, 00:16:43.089 "nvme_iov_md": false 00:16:43.089 }, 00:16:43.089 "driver_specific": { 00:16:43.089 "raid": { 00:16:43.089 "uuid": "8a23eddf-8cbe-4afb-9d01-0bfb94ddf942", 00:16:43.089 "strip_size_kb": 64, 00:16:43.089 "state": "online", 00:16:43.089 "raid_level": "raid5f", 00:16:43.089 "superblock": true, 00:16:43.089 "num_base_bdevs": 3, 00:16:43.089 "num_base_bdevs_discovered": 3, 00:16:43.089 "num_base_bdevs_operational": 3, 00:16:43.089 "base_bdevs_list": [ 00:16:43.089 { 00:16:43.089 "name": "pt1", 00:16:43.089 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:43.089 "is_configured": true, 00:16:43.089 "data_offset": 2048, 00:16:43.089 "data_size": 63488 00:16:43.089 }, 00:16:43.089 { 00:16:43.089 "name": "pt2", 00:16:43.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:43.089 "is_configured": true, 00:16:43.089 "data_offset": 2048, 00:16:43.089 "data_size": 63488 00:16:43.089 }, 00:16:43.089 { 00:16:43.089 "name": "pt3", 00:16:43.089 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:43.089 "is_configured": true, 00:16:43.089 "data_offset": 2048, 00:16:43.089 "data_size": 63488 00:16:43.089 } 00:16:43.089 ] 00:16:43.089 } 00:16:43.089 } 00:16:43.089 }' 00:16:43.089 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:43.089 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:43.089 pt2 00:16:43.089 pt3' 00:16:43.089 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.347 [2024-11-20 11:26:26.404098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8a23eddf-8cbe-4afb-9d01-0bfb94ddf942 '!=' 8a23eddf-8cbe-4afb-9d01-0bfb94ddf942 ']' 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:43.347 11:26:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.347 [2024-11-20 11:26:26.451926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.347 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.605 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.605 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.605 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.605 11:26:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.605 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.605 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.605 "name": "raid_bdev1", 00:16:43.605 "uuid": "8a23eddf-8cbe-4afb-9d01-0bfb94ddf942", 00:16:43.605 "strip_size_kb": 64, 00:16:43.605 "state": "online", 00:16:43.605 "raid_level": "raid5f", 00:16:43.605 "superblock": true, 00:16:43.605 "num_base_bdevs": 3, 00:16:43.605 "num_base_bdevs_discovered": 2, 00:16:43.605 "num_base_bdevs_operational": 2, 00:16:43.605 "base_bdevs_list": [ 00:16:43.605 { 00:16:43.605 "name": null, 00:16:43.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.605 "is_configured": false, 00:16:43.605 "data_offset": 0, 00:16:43.605 "data_size": 63488 00:16:43.605 }, 00:16:43.605 { 00:16:43.605 "name": "pt2", 00:16:43.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:43.605 "is_configured": true, 00:16:43.605 "data_offset": 2048, 00:16:43.605 "data_size": 63488 00:16:43.605 }, 00:16:43.605 { 00:16:43.605 "name": "pt3", 00:16:43.605 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:43.605 "is_configured": true, 00:16:43.605 "data_offset": 2048, 00:16:43.605 "data_size": 63488 00:16:43.605 } 00:16:43.605 ] 00:16:43.605 }' 00:16:43.605 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.605 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.862 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:43.862 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.862 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.862 [2024-11-20 11:26:26.962979] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:43.862 [2024-11-20 11:26:26.963087] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:43.862 [2024-11-20 11:26:26.963201] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.862 [2024-11-20 11:26:26.963294] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.862 [2024-11-20 11:26:26.963350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:43.862 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.862 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.862 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.862 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.862 11:26:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:44.121 11:26:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
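The jq filter used earlier in this log (bdev_raid.sh@188) pulls the names of configured base bdevs out of the raid bdev's JSON dump. A minimal standalone sketch of that selection, using an abbreviated copy of the dump above rather than a live RPC response:

```python
import json

# Abbreviated from the raid_bdev1 JSON dumped earlier in this log;
# only the fields the filter touches are kept.
bdev = json.loads("""
{
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "pt1", "is_configured": true},
        {"name": "pt2", "is_configured": true},
        {"name": "pt3", "is_configured": true}
      ]
    }
  }
}
""")

# Mirrors the jq filter:
#   .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name
names = [b["name"]
         for b in bdev["driver_specific"]["raid"]["base_bdevs_list"]
         if b["is_configured"]]
print(" ".join(names))  # pt1 pt2 pt3
```

This yields the same `base_bdev_names='pt1 pt2 pt3'` value the shell loop at bdev_raid.sh@191 then iterates over.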
00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.121 [2024-11-20 11:26:27.050811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:44.121 [2024-11-20 11:26:27.050881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.121 [2024-11-20 11:26:27.050901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:44.121 [2024-11-20 11:26:27.050913] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:16:44.121 [2024-11-20 11:26:27.053542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.121 [2024-11-20 11:26:27.053629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:44.121 [2024-11-20 11:26:27.053760] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:44.121 [2024-11-20 11:26:27.053863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:44.121 pt2 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.121 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.121 "name": "raid_bdev1", 00:16:44.121 "uuid": "8a23eddf-8cbe-4afb-9d01-0bfb94ddf942", 00:16:44.121 "strip_size_kb": 64, 00:16:44.121 "state": "configuring", 00:16:44.121 "raid_level": "raid5f", 00:16:44.121 "superblock": true, 00:16:44.121 "num_base_bdevs": 3, 00:16:44.121 "num_base_bdevs_discovered": 1, 00:16:44.121 "num_base_bdevs_operational": 2, 00:16:44.121 "base_bdevs_list": [ 00:16:44.122 { 00:16:44.122 "name": null, 00:16:44.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.122 "is_configured": false, 00:16:44.122 "data_offset": 2048, 00:16:44.122 "data_size": 63488 00:16:44.122 }, 00:16:44.122 { 00:16:44.122 "name": "pt2", 00:16:44.122 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:44.122 "is_configured": true, 00:16:44.122 "data_offset": 2048, 00:16:44.122 "data_size": 63488 00:16:44.122 }, 00:16:44.122 { 00:16:44.122 "name": null, 00:16:44.122 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:44.122 "is_configured": false, 00:16:44.122 "data_offset": 2048, 00:16:44.122 "data_size": 63488 00:16:44.122 } 00:16:44.122 ] 00:16:44.122 }' 00:16:44.122 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.122 11:26:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.689 [2024-11-20 11:26:27.569927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:44.689 [2024-11-20 11:26:27.570005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.689 [2024-11-20 11:26:27.570033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:44.689 [2024-11-20 11:26:27.570046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.689 [2024-11-20 11:26:27.570579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.689 [2024-11-20 11:26:27.570606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:44.689 [2024-11-20 11:26:27.570701] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:44.689 [2024-11-20 11:26:27.570738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:44.689 [2024-11-20 11:26:27.570878] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:44.689 [2024-11-20 11:26:27.570891] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:44.689 [2024-11-20 11:26:27.571175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:44.689 [2024-11-20 11:26:27.577547] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:44.689 [2024-11-20 11:26:27.577616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:16:44.689 [2024-11-20 11:26:27.578043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.689 pt3 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.689 11:26:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.689 "name": "raid_bdev1", 00:16:44.689 "uuid": "8a23eddf-8cbe-4afb-9d01-0bfb94ddf942", 00:16:44.689 "strip_size_kb": 64, 00:16:44.689 "state": "online", 00:16:44.689 "raid_level": "raid5f", 00:16:44.689 "superblock": true, 00:16:44.689 "num_base_bdevs": 3, 00:16:44.689 "num_base_bdevs_discovered": 2, 00:16:44.689 "num_base_bdevs_operational": 2, 00:16:44.689 "base_bdevs_list": [ 00:16:44.689 { 00:16:44.689 "name": null, 00:16:44.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.689 "is_configured": false, 00:16:44.689 "data_offset": 2048, 00:16:44.689 "data_size": 63488 00:16:44.689 }, 00:16:44.689 { 00:16:44.689 "name": "pt2", 00:16:44.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:44.689 "is_configured": true, 00:16:44.689 "data_offset": 2048, 00:16:44.689 "data_size": 63488 00:16:44.689 }, 00:16:44.689 { 00:16:44.689 "name": "pt3", 00:16:44.689 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:44.689 "is_configured": true, 00:16:44.689 "data_offset": 2048, 00:16:44.689 "data_size": 63488 00:16:44.689 } 00:16:44.689 ] 00:16:44.689 }' 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.689 11:26:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.947 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:44.947 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.947 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.947 [2024-11-20 11:26:28.010296] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:44.947 [2024-11-20 11:26:28.010408] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:44.947 [2024-11-20 11:26:28.010525] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:44.947 [2024-11-20 11:26:28.010606] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:44.947 [2024-11-20 11:26:28.010620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:44.947 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.947 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:44.947 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.947 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.947 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.947 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.205 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:45.205 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:45.205 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:16:45.205 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:16:45.205 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
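The repeated `verify_raid_bdev_state` calls in this log (bdev_raid.sh@103 onward) check a handful of fields from the `bdev_raid_get_bdevs` output via jq. A sketch of the same checks in Python, using abbreviated values from the dump above; whether the shell helper checks additional fields beyond those visible in the log is an assumption:

```python
import json

# Field names and values abbreviated from the raid_bdev_info dump above.
info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid5f",
  "strip_size_kb": 64,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    # The same comparisons the shell helper performs with jq extractions.
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == operational)

print(verify_raid_bdev_state(info, "online", "raid5f", 64, 2))  # True
```

After pt1 was removed, `num_base_bdevs_discovered` and `num_base_bdevs_operational` drop from 3 to 2 while the raid5f array stays online, which is exactly what the call with expected operational count 2 asserts.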
00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.206 [2024-11-20 11:26:28.086172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:45.206 [2024-11-20 11:26:28.086297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.206 [2024-11-20 11:26:28.086325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:45.206 [2024-11-20 11:26:28.086337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.206 [2024-11-20 11:26:28.089095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.206 [2024-11-20 11:26:28.089140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:45.206 [2024-11-20 11:26:28.089236] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:45.206 [2024-11-20 11:26:28.089285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:45.206 [2024-11-20 11:26:28.089433] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:45.206 [2024-11-20 11:26:28.089446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:45.206 [2024-11-20 11:26:28.089503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:45.206 [2024-11-20 11:26:28.089581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:45.206 pt1 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:16:45.206 11:26:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.206 "name": "raid_bdev1", 00:16:45.206 "uuid": "8a23eddf-8cbe-4afb-9d01-0bfb94ddf942", 00:16:45.206 "strip_size_kb": 64, 00:16:45.206 "state": "configuring", 00:16:45.206 "raid_level": "raid5f", 00:16:45.206 
"superblock": true, 00:16:45.206 "num_base_bdevs": 3, 00:16:45.206 "num_base_bdevs_discovered": 1, 00:16:45.206 "num_base_bdevs_operational": 2, 00:16:45.206 "base_bdevs_list": [ 00:16:45.206 { 00:16:45.206 "name": null, 00:16:45.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.206 "is_configured": false, 00:16:45.206 "data_offset": 2048, 00:16:45.206 "data_size": 63488 00:16:45.206 }, 00:16:45.206 { 00:16:45.206 "name": "pt2", 00:16:45.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:45.206 "is_configured": true, 00:16:45.206 "data_offset": 2048, 00:16:45.206 "data_size": 63488 00:16:45.206 }, 00:16:45.206 { 00:16:45.206 "name": null, 00:16:45.206 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:45.206 "is_configured": false, 00:16:45.206 "data_offset": 2048, 00:16:45.206 "data_size": 63488 00:16:45.206 } 00:16:45.206 ] 00:16:45.206 }' 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.206 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.465 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:45.465 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:45.465 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.465 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.465 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.723 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:45.723 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:45.723 11:26:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.723 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.723 [2024-11-20 11:26:28.613300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:45.723 [2024-11-20 11:26:28.613440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.723 [2024-11-20 11:26:28.613507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:45.723 [2024-11-20 11:26:28.613556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.723 [2024-11-20 11:26:28.614173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.723 [2024-11-20 11:26:28.614253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:45.723 [2024-11-20 11:26:28.614399] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:45.723 [2024-11-20 11:26:28.614478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:45.723 [2024-11-20 11:26:28.614674] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:45.723 [2024-11-20 11:26:28.614724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:45.723 [2024-11-20 11:26:28.615067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:45.723 [2024-11-20 11:26:28.622838] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:45.723 [2024-11-20 11:26:28.622928] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:45.723 [2024-11-20 11:26:28.623311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.723 pt3 00:16:45.723 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:45.723 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:45.723 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.724 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.724 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.724 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.724 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:45.724 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.724 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.724 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.724 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.724 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.724 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.724 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.724 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.724 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.724 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.724 "name": "raid_bdev1", 00:16:45.724 "uuid": "8a23eddf-8cbe-4afb-9d01-0bfb94ddf942", 00:16:45.724 "strip_size_kb": 64, 00:16:45.724 "state": "online", 00:16:45.724 "raid_level": 
"raid5f", 00:16:45.724 "superblock": true, 00:16:45.724 "num_base_bdevs": 3, 00:16:45.724 "num_base_bdevs_discovered": 2, 00:16:45.724 "num_base_bdevs_operational": 2, 00:16:45.724 "base_bdevs_list": [ 00:16:45.724 { 00:16:45.724 "name": null, 00:16:45.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.724 "is_configured": false, 00:16:45.724 "data_offset": 2048, 00:16:45.724 "data_size": 63488 00:16:45.724 }, 00:16:45.724 { 00:16:45.724 "name": "pt2", 00:16:45.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:45.724 "is_configured": true, 00:16:45.724 "data_offset": 2048, 00:16:45.724 "data_size": 63488 00:16:45.724 }, 00:16:45.724 { 00:16:45.724 "name": "pt3", 00:16:45.724 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:45.724 "is_configured": true, 00:16:45.724 "data_offset": 2048, 00:16:45.724 "data_size": 63488 00:16:45.724 } 00:16:45.724 ] 00:16:45.724 }' 00:16:45.724 11:26:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.724 11:26:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.292 11:26:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:46.292 11:26:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:46.292 11:26:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.292 11:26:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.292 11:26:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.292 11:26:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:46.292 11:26:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:46.292 11:26:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:16:46.292 11:26:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.292 11:26:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.292 [2024-11-20 11:26:29.191740] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.292 11:26:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.292 11:26:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8a23eddf-8cbe-4afb-9d01-0bfb94ddf942 '!=' 8a23eddf-8cbe-4afb-9d01-0bfb94ddf942 ']' 00:16:46.292 11:26:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81311 00:16:46.292 11:26:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81311 ']' 00:16:46.293 11:26:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81311 00:16:46.293 11:26:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:46.293 11:26:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.293 11:26:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81311 00:16:46.293 killing process with pid 81311 00:16:46.293 11:26:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:46.293 11:26:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:46.293 11:26:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81311' 00:16:46.293 11:26:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81311 00:16:46.293 [2024-11-20 11:26:29.265256] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:46.293 [2024-11-20 11:26:29.265376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:16:46.293 11:26:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81311 00:16:46.293 [2024-11-20 11:26:29.265472] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.293 [2024-11-20 11:26:29.265489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:46.550 [2024-11-20 11:26:29.624839] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:47.975 11:26:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:47.975 00:16:47.975 real 0m8.554s 00:16:47.975 user 0m13.363s 00:16:47.975 sys 0m1.499s 00:16:47.975 ************************************ 00:16:47.975 END TEST raid5f_superblock_test 00:16:47.975 ************************************ 00:16:47.975 11:26:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:47.975 11:26:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.975 11:26:30 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:47.975 11:26:30 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:16:47.975 11:26:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:47.975 11:26:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.975 11:26:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:47.975 ************************************ 00:16:47.975 START TEST raid5f_rebuild_test 00:16:47.975 ************************************ 00:16:47.975 11:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:16:47.975 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:47.975 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:16:47.975 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:47.975 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:47.975 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:47.975 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:47.976 11:26:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81760 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81760 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81760 ']' 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:47.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.976 11:26:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.976 [2024-11-20 11:26:31.046841] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:16:47.976 [2024-11-20 11:26:31.047077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81760 ] 00:16:47.976 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:47.976 Zero copy mechanism will not be used. 00:16:48.234 [2024-11-20 11:26:31.214335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.234 [2024-11-20 11:26:31.347601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.493 [2024-11-20 11:26:31.592769] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.493 [2024-11-20 11:26:31.592859] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.061 BaseBdev1_malloc 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.061 
11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.061 [2024-11-20 11:26:32.069436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:49.061 [2024-11-20 11:26:32.069554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.061 [2024-11-20 11:26:32.069583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:49.061 [2024-11-20 11:26:32.069600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.061 [2024-11-20 11:26:32.072269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.061 [2024-11-20 11:26:32.072385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:49.061 BaseBdev1 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.061 BaseBdev2_malloc 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.061 [2024-11-20 11:26:32.134594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:49.061 [2024-11-20 11:26:32.134684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.061 [2024-11-20 11:26:32.134711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:49.061 [2024-11-20 11:26:32.134731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.061 [2024-11-20 11:26:32.137294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.061 [2024-11-20 11:26:32.137354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:49.061 BaseBdev2 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.061 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.321 BaseBdev3_malloc 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.321 [2024-11-20 11:26:32.207348] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:49.321 [2024-11-20 11:26:32.207430] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.321 [2024-11-20 11:26:32.207489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:49.321 [2024-11-20 11:26:32.207511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.321 [2024-11-20 11:26:32.210000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.321 [2024-11-20 11:26:32.210055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:49.321 BaseBdev3 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.321 spare_malloc 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.321 spare_delay 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.321 [2024-11-20 11:26:32.276923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:49.321 [2024-11-20 11:26:32.277005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.321 [2024-11-20 11:26:32.277033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:49.321 [2024-11-20 11:26:32.277049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.321 [2024-11-20 11:26:32.279522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.321 [2024-11-20 11:26:32.279575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:49.321 spare 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.321 [2024-11-20 11:26:32.288963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:49.321 [2024-11-20 11:26:32.290913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:49.321 [2024-11-20 11:26:32.291065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:49.321 [2024-11-20 11:26:32.291183] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:49.321 [2024-11-20 11:26:32.291198] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:49.321 [2024-11-20 
11:26:32.291571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:49.321 [2024-11-20 11:26:32.297986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:49.321 [2024-11-20 11:26:32.298062] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:49.321 [2024-11-20 11:26:32.298307] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.321 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.321 "name": "raid_bdev1", 00:16:49.321 "uuid": "9d72dcf8-01a1-4959-8ecf-96f2eb40fd48", 00:16:49.321 "strip_size_kb": 64, 00:16:49.321 "state": "online", 00:16:49.321 "raid_level": "raid5f", 00:16:49.321 "superblock": false, 00:16:49.321 "num_base_bdevs": 3, 00:16:49.321 "num_base_bdevs_discovered": 3, 00:16:49.321 "num_base_bdevs_operational": 3, 00:16:49.321 "base_bdevs_list": [ 00:16:49.321 { 00:16:49.321 "name": "BaseBdev1", 00:16:49.321 "uuid": "6da7a05e-db75-5450-9988-6c935c297e76", 00:16:49.321 "is_configured": true, 00:16:49.321 "data_offset": 0, 00:16:49.321 "data_size": 65536 00:16:49.322 }, 00:16:49.322 { 00:16:49.322 "name": "BaseBdev2", 00:16:49.322 "uuid": "926ccad7-a49d-541b-b71d-f2722608e70a", 00:16:49.322 "is_configured": true, 00:16:49.322 "data_offset": 0, 00:16:49.322 "data_size": 65536 00:16:49.322 }, 00:16:49.322 { 00:16:49.322 "name": "BaseBdev3", 00:16:49.322 "uuid": "6ca1591d-c9f9-5c39-a00b-87e48fa621ac", 00:16:49.322 "is_configured": true, 00:16:49.322 "data_offset": 0, 00:16:49.322 "data_size": 65536 00:16:49.322 } 00:16:49.322 ] 00:16:49.322 }' 00:16:49.322 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.322 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.890 11:26:32 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.890 [2024-11-20 11:26:32.760889] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:49.890 11:26:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:50.148 [2024-11-20 11:26:33.076162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:50.148 /dev/nbd0 00:16:50.148 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:50.148 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:50.148 11:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:50.148 11:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:50.148 11:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:50.148 11:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:50.148 11:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:50.148 11:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:50.148 11:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:50.148 11:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:50.148 11:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:50.148 1+0 records in 00:16:50.148 1+0 records out 00:16:50.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309406 s, 13.2 MB/s 00:16:50.148 
11:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:50.148 11:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:50.148 11:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:50.148 11:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:50.148 11:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:50.149 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:50.149 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:50.149 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:50.149 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:50.149 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:50.149 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:16:50.748 512+0 records in 00:16:50.748 512+0 records out 00:16:50.748 67108864 bytes (67 MB, 64 MiB) copied, 0.455671 s, 147 MB/s 00:16:50.748 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:50.748 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:50.748 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:50.748 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:50.748 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:50.748 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:16:50.748 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:50.748 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:50.748 [2024-11-20 11:26:33.862866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.007 [2024-11-20 11:26:33.879912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.007 11:26:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.007 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.007 "name": "raid_bdev1", 00:16:51.007 "uuid": "9d72dcf8-01a1-4959-8ecf-96f2eb40fd48", 00:16:51.007 "strip_size_kb": 64, 00:16:51.007 "state": "online", 00:16:51.007 "raid_level": "raid5f", 00:16:51.007 "superblock": false, 00:16:51.007 "num_base_bdevs": 3, 00:16:51.007 "num_base_bdevs_discovered": 2, 00:16:51.007 "num_base_bdevs_operational": 2, 00:16:51.007 "base_bdevs_list": [ 00:16:51.007 { 00:16:51.007 "name": null, 00:16:51.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.007 "is_configured": false, 00:16:51.007 "data_offset": 0, 00:16:51.007 "data_size": 65536 00:16:51.007 }, 00:16:51.007 { 00:16:51.007 
"name": "BaseBdev2", 00:16:51.007 "uuid": "926ccad7-a49d-541b-b71d-f2722608e70a", 00:16:51.007 "is_configured": true, 00:16:51.007 "data_offset": 0, 00:16:51.007 "data_size": 65536 00:16:51.007 }, 00:16:51.007 { 00:16:51.007 "name": "BaseBdev3", 00:16:51.007 "uuid": "6ca1591d-c9f9-5c39-a00b-87e48fa621ac", 00:16:51.008 "is_configured": true, 00:16:51.008 "data_offset": 0, 00:16:51.008 "data_size": 65536 00:16:51.008 } 00:16:51.008 ] 00:16:51.008 }' 00:16:51.008 11:26:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.008 11:26:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.265 11:26:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:51.265 11:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.265 11:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.265 [2024-11-20 11:26:34.359681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:51.524 [2024-11-20 11:26:34.381911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:16:51.524 11:26:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.524 11:26:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:51.524 [2024-11-20 11:26:34.393096] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:52.460 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.460 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.460 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.460 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:16:52.460 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.460 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.460 11:26:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.460 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.460 11:26:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.460 11:26:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.460 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.460 "name": "raid_bdev1", 00:16:52.460 "uuid": "9d72dcf8-01a1-4959-8ecf-96f2eb40fd48", 00:16:52.460 "strip_size_kb": 64, 00:16:52.460 "state": "online", 00:16:52.460 "raid_level": "raid5f", 00:16:52.460 "superblock": false, 00:16:52.460 "num_base_bdevs": 3, 00:16:52.460 "num_base_bdevs_discovered": 3, 00:16:52.460 "num_base_bdevs_operational": 3, 00:16:52.460 "process": { 00:16:52.460 "type": "rebuild", 00:16:52.460 "target": "spare", 00:16:52.460 "progress": { 00:16:52.460 "blocks": 18432, 00:16:52.460 "percent": 14 00:16:52.460 } 00:16:52.460 }, 00:16:52.460 "base_bdevs_list": [ 00:16:52.460 { 00:16:52.460 "name": "spare", 00:16:52.460 "uuid": "05187db2-55a8-554b-82dd-47d0e71bb364", 00:16:52.460 "is_configured": true, 00:16:52.460 "data_offset": 0, 00:16:52.460 "data_size": 65536 00:16:52.460 }, 00:16:52.460 { 00:16:52.460 "name": "BaseBdev2", 00:16:52.460 "uuid": "926ccad7-a49d-541b-b71d-f2722608e70a", 00:16:52.460 "is_configured": true, 00:16:52.460 "data_offset": 0, 00:16:52.460 "data_size": 65536 00:16:52.460 }, 00:16:52.460 { 00:16:52.460 "name": "BaseBdev3", 00:16:52.460 "uuid": "6ca1591d-c9f9-5c39-a00b-87e48fa621ac", 00:16:52.460 "is_configured": true, 00:16:52.460 "data_offset": 0, 00:16:52.460 
"data_size": 65536 00:16:52.460 } 00:16:52.460 ] 00:16:52.460 }' 00:16:52.460 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.460 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.460 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.460 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.460 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:52.460 11:26:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.460 11:26:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.460 [2024-11-20 11:26:35.526252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:52.720 [2024-11-20 11:26:35.606410] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:52.720 [2024-11-20 11:26:35.606544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.721 [2024-11-20 11:26:35.606576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:52.721 [2024-11-20 11:26:35.606589] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:52.721 11:26:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.721 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:52.721 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.721 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.721 11:26:35 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.721 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.721 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:52.721 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.721 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.721 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.721 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.721 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.721 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.721 11:26:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.721 11:26:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.721 11:26:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.721 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.721 "name": "raid_bdev1", 00:16:52.721 "uuid": "9d72dcf8-01a1-4959-8ecf-96f2eb40fd48", 00:16:52.721 "strip_size_kb": 64, 00:16:52.721 "state": "online", 00:16:52.721 "raid_level": "raid5f", 00:16:52.721 "superblock": false, 00:16:52.721 "num_base_bdevs": 3, 00:16:52.721 "num_base_bdevs_discovered": 2, 00:16:52.721 "num_base_bdevs_operational": 2, 00:16:52.721 "base_bdevs_list": [ 00:16:52.721 { 00:16:52.721 "name": null, 00:16:52.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.721 "is_configured": false, 00:16:52.721 "data_offset": 0, 00:16:52.721 "data_size": 65536 00:16:52.721 }, 00:16:52.721 { 00:16:52.721 "name": "BaseBdev2", 00:16:52.721 
"uuid": "926ccad7-a49d-541b-b71d-f2722608e70a", 00:16:52.721 "is_configured": true, 00:16:52.721 "data_offset": 0, 00:16:52.721 "data_size": 65536 00:16:52.721 }, 00:16:52.721 { 00:16:52.721 "name": "BaseBdev3", 00:16:52.721 "uuid": "6ca1591d-c9f9-5c39-a00b-87e48fa621ac", 00:16:52.721 "is_configured": true, 00:16:52.721 "data_offset": 0, 00:16:52.721 "data_size": 65536 00:16:52.721 } 00:16:52.721 ] 00:16:52.721 }' 00:16:52.721 11:26:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.721 11:26:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.987 11:26:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:52.987 11:26:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.987 11:26:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:52.987 11:26:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:52.987 11:26:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.987 11:26:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.987 11:26:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.987 11:26:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.987 11:26:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.257 11:26:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.257 11:26:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.257 "name": "raid_bdev1", 00:16:53.257 "uuid": "9d72dcf8-01a1-4959-8ecf-96f2eb40fd48", 00:16:53.257 "strip_size_kb": 64, 00:16:53.257 "state": "online", 00:16:53.257 "raid_level": 
"raid5f", 00:16:53.257 "superblock": false, 00:16:53.257 "num_base_bdevs": 3, 00:16:53.257 "num_base_bdevs_discovered": 2, 00:16:53.257 "num_base_bdevs_operational": 2, 00:16:53.257 "base_bdevs_list": [ 00:16:53.257 { 00:16:53.257 "name": null, 00:16:53.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.257 "is_configured": false, 00:16:53.257 "data_offset": 0, 00:16:53.257 "data_size": 65536 00:16:53.257 }, 00:16:53.257 { 00:16:53.257 "name": "BaseBdev2", 00:16:53.257 "uuid": "926ccad7-a49d-541b-b71d-f2722608e70a", 00:16:53.257 "is_configured": true, 00:16:53.257 "data_offset": 0, 00:16:53.257 "data_size": 65536 00:16:53.257 }, 00:16:53.257 { 00:16:53.257 "name": "BaseBdev3", 00:16:53.257 "uuid": "6ca1591d-c9f9-5c39-a00b-87e48fa621ac", 00:16:53.257 "is_configured": true, 00:16:53.257 "data_offset": 0, 00:16:53.257 "data_size": 65536 00:16:53.257 } 00:16:53.257 ] 00:16:53.257 }' 00:16:53.257 11:26:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.257 11:26:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:53.257 11:26:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.257 11:26:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:53.257 11:26:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:53.257 11:26:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.257 11:26:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.257 [2024-11-20 11:26:36.222153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:53.257 [2024-11-20 11:26:36.242966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:53.257 11:26:36 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.257 11:26:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:53.257 [2024-11-20 11:26:36.253156] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:54.194 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.194 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.194 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.194 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.194 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.194 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.194 11:26:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.194 11:26:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.194 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.194 11:26:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.194 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.194 "name": "raid_bdev1", 00:16:54.194 "uuid": "9d72dcf8-01a1-4959-8ecf-96f2eb40fd48", 00:16:54.194 "strip_size_kb": 64, 00:16:54.194 "state": "online", 00:16:54.194 "raid_level": "raid5f", 00:16:54.194 "superblock": false, 00:16:54.194 "num_base_bdevs": 3, 00:16:54.194 "num_base_bdevs_discovered": 3, 00:16:54.194 "num_base_bdevs_operational": 3, 00:16:54.194 "process": { 00:16:54.194 "type": "rebuild", 00:16:54.194 "target": "spare", 00:16:54.194 "progress": { 00:16:54.194 "blocks": 18432, 00:16:54.194 
"percent": 14 00:16:54.194 } 00:16:54.194 }, 00:16:54.194 "base_bdevs_list": [ 00:16:54.194 { 00:16:54.194 "name": "spare", 00:16:54.194 "uuid": "05187db2-55a8-554b-82dd-47d0e71bb364", 00:16:54.194 "is_configured": true, 00:16:54.194 "data_offset": 0, 00:16:54.194 "data_size": 65536 00:16:54.194 }, 00:16:54.194 { 00:16:54.194 "name": "BaseBdev2", 00:16:54.194 "uuid": "926ccad7-a49d-541b-b71d-f2722608e70a", 00:16:54.194 "is_configured": true, 00:16:54.194 "data_offset": 0, 00:16:54.194 "data_size": 65536 00:16:54.194 }, 00:16:54.194 { 00:16:54.194 "name": "BaseBdev3", 00:16:54.194 "uuid": "6ca1591d-c9f9-5c39-a00b-87e48fa621ac", 00:16:54.194 "is_configured": true, 00:16:54.194 "data_offset": 0, 00:16:54.194 "data_size": 65536 00:16:54.194 } 00:16:54.194 ] 00:16:54.194 }' 00:16:54.194 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=563 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.452 "name": "raid_bdev1", 00:16:54.452 "uuid": "9d72dcf8-01a1-4959-8ecf-96f2eb40fd48", 00:16:54.452 "strip_size_kb": 64, 00:16:54.452 "state": "online", 00:16:54.452 "raid_level": "raid5f", 00:16:54.452 "superblock": false, 00:16:54.452 "num_base_bdevs": 3, 00:16:54.452 "num_base_bdevs_discovered": 3, 00:16:54.452 "num_base_bdevs_operational": 3, 00:16:54.452 "process": { 00:16:54.452 "type": "rebuild", 00:16:54.452 "target": "spare", 00:16:54.452 "progress": { 00:16:54.452 "blocks": 22528, 00:16:54.452 "percent": 17 00:16:54.452 } 00:16:54.452 }, 00:16:54.452 "base_bdevs_list": [ 00:16:54.452 { 00:16:54.452 "name": "spare", 00:16:54.452 "uuid": "05187db2-55a8-554b-82dd-47d0e71bb364", 00:16:54.452 "is_configured": true, 00:16:54.452 "data_offset": 0, 00:16:54.452 "data_size": 65536 00:16:54.452 }, 00:16:54.452 { 00:16:54.452 "name": "BaseBdev2", 00:16:54.452 "uuid": "926ccad7-a49d-541b-b71d-f2722608e70a", 00:16:54.452 "is_configured": true, 00:16:54.452 "data_offset": 0, 00:16:54.452 
"data_size": 65536 00:16:54.452 }, 00:16:54.452 { 00:16:54.452 "name": "BaseBdev3", 00:16:54.452 "uuid": "6ca1591d-c9f9-5c39-a00b-87e48fa621ac", 00:16:54.452 "is_configured": true, 00:16:54.452 "data_offset": 0, 00:16:54.452 "data_size": 65536 00:16:54.452 } 00:16:54.452 ] 00:16:54.452 }' 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.452 11:26:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:55.826 11:26:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.826 11:26:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.826 11:26:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.826 11:26:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.826 11:26:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.826 11:26:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.826 11:26:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.826 11:26:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.826 11:26:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.826 11:26:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.826 11:26:38 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.826 11:26:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.826 "name": "raid_bdev1", 00:16:55.826 "uuid": "9d72dcf8-01a1-4959-8ecf-96f2eb40fd48", 00:16:55.826 "strip_size_kb": 64, 00:16:55.826 "state": "online", 00:16:55.826 "raid_level": "raid5f", 00:16:55.826 "superblock": false, 00:16:55.826 "num_base_bdevs": 3, 00:16:55.826 "num_base_bdevs_discovered": 3, 00:16:55.826 "num_base_bdevs_operational": 3, 00:16:55.826 "process": { 00:16:55.826 "type": "rebuild", 00:16:55.826 "target": "spare", 00:16:55.826 "progress": { 00:16:55.826 "blocks": 45056, 00:16:55.826 "percent": 34 00:16:55.826 } 00:16:55.826 }, 00:16:55.826 "base_bdevs_list": [ 00:16:55.826 { 00:16:55.826 "name": "spare", 00:16:55.826 "uuid": "05187db2-55a8-554b-82dd-47d0e71bb364", 00:16:55.826 "is_configured": true, 00:16:55.826 "data_offset": 0, 00:16:55.826 "data_size": 65536 00:16:55.826 }, 00:16:55.826 { 00:16:55.826 "name": "BaseBdev2", 00:16:55.826 "uuid": "926ccad7-a49d-541b-b71d-f2722608e70a", 00:16:55.826 "is_configured": true, 00:16:55.826 "data_offset": 0, 00:16:55.826 "data_size": 65536 00:16:55.826 }, 00:16:55.826 { 00:16:55.826 "name": "BaseBdev3", 00:16:55.826 "uuid": "6ca1591d-c9f9-5c39-a00b-87e48fa621ac", 00:16:55.826 "is_configured": true, 00:16:55.826 "data_offset": 0, 00:16:55.826 "data_size": 65536 00:16:55.826 } 00:16:55.826 ] 00:16:55.826 }' 00:16:55.826 11:26:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.826 11:26:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.826 11:26:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.826 11:26:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.826 11:26:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:16:56.759 11:26:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:56.759 11:26:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:56.759 11:26:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.759 11:26:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:56.759 11:26:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:56.759 11:26:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.759 11:26:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.759 11:26:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.760 11:26:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.760 11:26:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.760 11:26:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.760 11:26:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.760 "name": "raid_bdev1", 00:16:56.760 "uuid": "9d72dcf8-01a1-4959-8ecf-96f2eb40fd48", 00:16:56.760 "strip_size_kb": 64, 00:16:56.760 "state": "online", 00:16:56.760 "raid_level": "raid5f", 00:16:56.760 "superblock": false, 00:16:56.760 "num_base_bdevs": 3, 00:16:56.760 "num_base_bdevs_discovered": 3, 00:16:56.760 "num_base_bdevs_operational": 3, 00:16:56.760 "process": { 00:16:56.760 "type": "rebuild", 00:16:56.760 "target": "spare", 00:16:56.760 "progress": { 00:16:56.760 "blocks": 67584, 00:16:56.760 "percent": 51 00:16:56.760 } 00:16:56.760 }, 00:16:56.760 "base_bdevs_list": [ 00:16:56.760 { 00:16:56.760 "name": "spare", 00:16:56.760 "uuid": 
"05187db2-55a8-554b-82dd-47d0e71bb364", 00:16:56.760 "is_configured": true, 00:16:56.760 "data_offset": 0, 00:16:56.760 "data_size": 65536 00:16:56.760 }, 00:16:56.760 { 00:16:56.760 "name": "BaseBdev2", 00:16:56.760 "uuid": "926ccad7-a49d-541b-b71d-f2722608e70a", 00:16:56.760 "is_configured": true, 00:16:56.760 "data_offset": 0, 00:16:56.760 "data_size": 65536 00:16:56.760 }, 00:16:56.760 { 00:16:56.760 "name": "BaseBdev3", 00:16:56.760 "uuid": "6ca1591d-c9f9-5c39-a00b-87e48fa621ac", 00:16:56.760 "is_configured": true, 00:16:56.760 "data_offset": 0, 00:16:56.760 "data_size": 65536 00:16:56.760 } 00:16:56.760 ] 00:16:56.760 }' 00:16:56.760 11:26:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.760 11:26:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:56.760 11:26:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.760 11:26:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:56.760 11:26:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:58.137 11:26:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:58.137 11:26:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.137 11:26:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.137 11:26:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.137 11:26:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.138 11:26:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.138 11:26:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.138 11:26:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.138 11:26:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.138 11:26:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.138 11:26:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.138 11:26:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.138 "name": "raid_bdev1", 00:16:58.138 "uuid": "9d72dcf8-01a1-4959-8ecf-96f2eb40fd48", 00:16:58.138 "strip_size_kb": 64, 00:16:58.138 "state": "online", 00:16:58.138 "raid_level": "raid5f", 00:16:58.138 "superblock": false, 00:16:58.138 "num_base_bdevs": 3, 00:16:58.138 "num_base_bdevs_discovered": 3, 00:16:58.138 "num_base_bdevs_operational": 3, 00:16:58.138 "process": { 00:16:58.138 "type": "rebuild", 00:16:58.138 "target": "spare", 00:16:58.138 "progress": { 00:16:58.138 "blocks": 92160, 00:16:58.138 "percent": 70 00:16:58.138 } 00:16:58.138 }, 00:16:58.138 "base_bdevs_list": [ 00:16:58.138 { 00:16:58.138 "name": "spare", 00:16:58.138 "uuid": "05187db2-55a8-554b-82dd-47d0e71bb364", 00:16:58.138 "is_configured": true, 00:16:58.138 "data_offset": 0, 00:16:58.138 "data_size": 65536 00:16:58.138 }, 00:16:58.138 { 00:16:58.138 "name": "BaseBdev2", 00:16:58.138 "uuid": "926ccad7-a49d-541b-b71d-f2722608e70a", 00:16:58.138 "is_configured": true, 00:16:58.138 "data_offset": 0, 00:16:58.138 "data_size": 65536 00:16:58.138 }, 00:16:58.138 { 00:16:58.138 "name": "BaseBdev3", 00:16:58.138 "uuid": "6ca1591d-c9f9-5c39-a00b-87e48fa621ac", 00:16:58.138 "is_configured": true, 00:16:58.138 "data_offset": 0, 00:16:58.138 "data_size": 65536 00:16:58.138 } 00:16:58.138 ] 00:16:58.138 }' 00:16:58.138 11:26:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.138 11:26:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.138 11:26:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.138 11:26:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.138 11:26:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:59.076 11:26:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.076 11:26:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.076 11:26:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.076 11:26:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.076 11:26:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.076 11:26:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.076 11:26:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.076 11:26:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.076 11:26:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.076 11:26:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.076 11:26:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.076 11:26:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.076 "name": "raid_bdev1", 00:16:59.076 "uuid": "9d72dcf8-01a1-4959-8ecf-96f2eb40fd48", 00:16:59.076 "strip_size_kb": 64, 00:16:59.076 "state": "online", 00:16:59.076 "raid_level": "raid5f", 00:16:59.076 "superblock": false, 00:16:59.076 "num_base_bdevs": 3, 00:16:59.076 "num_base_bdevs_discovered": 3, 00:16:59.076 
"num_base_bdevs_operational": 3, 00:16:59.076 "process": { 00:16:59.076 "type": "rebuild", 00:16:59.076 "target": "spare", 00:16:59.076 "progress": { 00:16:59.076 "blocks": 114688, 00:16:59.076 "percent": 87 00:16:59.076 } 00:16:59.076 }, 00:16:59.076 "base_bdevs_list": [ 00:16:59.076 { 00:16:59.076 "name": "spare", 00:16:59.076 "uuid": "05187db2-55a8-554b-82dd-47d0e71bb364", 00:16:59.076 "is_configured": true, 00:16:59.076 "data_offset": 0, 00:16:59.076 "data_size": 65536 00:16:59.076 }, 00:16:59.076 { 00:16:59.077 "name": "BaseBdev2", 00:16:59.077 "uuid": "926ccad7-a49d-541b-b71d-f2722608e70a", 00:16:59.077 "is_configured": true, 00:16:59.077 "data_offset": 0, 00:16:59.077 "data_size": 65536 00:16:59.077 }, 00:16:59.077 { 00:16:59.077 "name": "BaseBdev3", 00:16:59.077 "uuid": "6ca1591d-c9f9-5c39-a00b-87e48fa621ac", 00:16:59.077 "is_configured": true, 00:16:59.077 "data_offset": 0, 00:16:59.077 "data_size": 65536 00:16:59.077 } 00:16:59.077 ] 00:16:59.077 }' 00:16:59.077 11:26:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.077 11:26:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.077 11:26:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.077 11:26:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.077 11:26:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:59.645 [2024-11-20 11:26:42.719615] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:59.645 [2024-11-20 11:26:42.719750] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:59.645 [2024-11-20 11:26:42.719831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.215 "name": "raid_bdev1", 00:17:00.215 "uuid": "9d72dcf8-01a1-4959-8ecf-96f2eb40fd48", 00:17:00.215 "strip_size_kb": 64, 00:17:00.215 "state": "online", 00:17:00.215 "raid_level": "raid5f", 00:17:00.215 "superblock": false, 00:17:00.215 "num_base_bdevs": 3, 00:17:00.215 "num_base_bdevs_discovered": 3, 00:17:00.215 "num_base_bdevs_operational": 3, 00:17:00.215 "base_bdevs_list": [ 00:17:00.215 { 00:17:00.215 "name": "spare", 00:17:00.215 "uuid": "05187db2-55a8-554b-82dd-47d0e71bb364", 00:17:00.215 "is_configured": true, 00:17:00.215 "data_offset": 0, 00:17:00.215 "data_size": 65536 00:17:00.215 }, 00:17:00.215 { 00:17:00.215 "name": "BaseBdev2", 00:17:00.215 "uuid": "926ccad7-a49d-541b-b71d-f2722608e70a", 00:17:00.215 "is_configured": true, 00:17:00.215 
"data_offset": 0, 00:17:00.215 "data_size": 65536 00:17:00.215 }, 00:17:00.215 { 00:17:00.215 "name": "BaseBdev3", 00:17:00.215 "uuid": "6ca1591d-c9f9-5c39-a00b-87e48fa621ac", 00:17:00.215 "is_configured": true, 00:17:00.215 "data_offset": 0, 00:17:00.215 "data_size": 65536 00:17:00.215 } 00:17:00.215 ] 00:17:00.215 }' 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.215 11:26:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.215 "name": "raid_bdev1", 00:17:00.215 "uuid": "9d72dcf8-01a1-4959-8ecf-96f2eb40fd48", 00:17:00.215 "strip_size_kb": 64, 00:17:00.215 "state": "online", 00:17:00.215 "raid_level": "raid5f", 00:17:00.215 "superblock": false, 00:17:00.215 "num_base_bdevs": 3, 00:17:00.215 "num_base_bdevs_discovered": 3, 00:17:00.215 "num_base_bdevs_operational": 3, 00:17:00.215 "base_bdevs_list": [ 00:17:00.215 { 00:17:00.215 "name": "spare", 00:17:00.215 "uuid": "05187db2-55a8-554b-82dd-47d0e71bb364", 00:17:00.215 "is_configured": true, 00:17:00.215 "data_offset": 0, 00:17:00.215 "data_size": 65536 00:17:00.215 }, 00:17:00.215 { 00:17:00.215 "name": "BaseBdev2", 00:17:00.215 "uuid": "926ccad7-a49d-541b-b71d-f2722608e70a", 00:17:00.215 "is_configured": true, 00:17:00.215 "data_offset": 0, 00:17:00.215 "data_size": 65536 00:17:00.215 }, 00:17:00.215 { 00:17:00.215 "name": "BaseBdev3", 00:17:00.215 "uuid": "6ca1591d-c9f9-5c39-a00b-87e48fa621ac", 00:17:00.215 "is_configured": true, 00:17:00.215 "data_offset": 0, 00:17:00.215 "data_size": 65536 00:17:00.215 } 00:17:00.215 ] 00:17:00.215 }' 00:17:00.215 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.475 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:00.475 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.475 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:00.475 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:00.475 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.475 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.475 11:26:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.475 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.475 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:00.475 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.475 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.475 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.475 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.475 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.475 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.475 11:26:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.475 11:26:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.475 11:26:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.475 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.475 "name": "raid_bdev1", 00:17:00.475 "uuid": "9d72dcf8-01a1-4959-8ecf-96f2eb40fd48", 00:17:00.475 "strip_size_kb": 64, 00:17:00.475 "state": "online", 00:17:00.475 "raid_level": "raid5f", 00:17:00.475 "superblock": false, 00:17:00.475 "num_base_bdevs": 3, 00:17:00.475 "num_base_bdevs_discovered": 3, 00:17:00.475 "num_base_bdevs_operational": 3, 00:17:00.475 "base_bdevs_list": [ 00:17:00.475 { 00:17:00.475 "name": "spare", 00:17:00.475 "uuid": "05187db2-55a8-554b-82dd-47d0e71bb364", 00:17:00.475 "is_configured": true, 00:17:00.475 "data_offset": 0, 00:17:00.475 "data_size": 65536 00:17:00.475 }, 00:17:00.475 { 00:17:00.475 
"name": "BaseBdev2", 00:17:00.475 "uuid": "926ccad7-a49d-541b-b71d-f2722608e70a", 00:17:00.475 "is_configured": true, 00:17:00.475 "data_offset": 0, 00:17:00.475 "data_size": 65536 00:17:00.475 }, 00:17:00.475 { 00:17:00.475 "name": "BaseBdev3", 00:17:00.475 "uuid": "6ca1591d-c9f9-5c39-a00b-87e48fa621ac", 00:17:00.475 "is_configured": true, 00:17:00.475 "data_offset": 0, 00:17:00.475 "data_size": 65536 00:17:00.475 } 00:17:00.475 ] 00:17:00.475 }' 00:17:00.475 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.475 11:26:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.045 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:01.045 11:26:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.045 11:26:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.045 [2024-11-20 11:26:43.889411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.045 [2024-11-20 11:26:43.889532] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:01.045 [2024-11-20 11:26:43.889672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.045 [2024-11-20 11:26:43.889784] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:01.045 [2024-11-20 11:26:43.889805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:01.045 11:26:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.045 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:01.045 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.045 11:26:43 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.045 11:26:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.045 11:26:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.045 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:01.045 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:01.045 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:01.046 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:01.046 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:01.046 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:01.046 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:01.046 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:01.046 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:01.046 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:01.046 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:01.046 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:01.046 11:26:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:01.306 /dev/nbd0 00:17:01.306 11:26:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:01.306 11:26:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:01.306 11:26:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:01.306 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:01.306 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:01.306 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:01.306 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:01.306 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:01.306 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:01.306 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:01.306 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:01.306 1+0 records in 00:17:01.306 1+0 records out 00:17:01.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381124 s, 10.7 MB/s 00:17:01.306 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.306 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:01.306 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.306 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:01.306 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:01.306 11:26:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:01.306 11:26:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:01.306 11:26:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:01.566 /dev/nbd1 00:17:01.566 11:26:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:01.566 11:26:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:01.566 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:01.566 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:01.566 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:01.566 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:01.566 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:01.566 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:01.566 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:01.566 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:01.566 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:01.566 1+0 records in 00:17:01.566 1+0 records out 00:17:01.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578693 s, 7.1 MB/s 00:17:01.566 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.566 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:01.566 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.566 11:26:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:01.566 11:26:44 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:01.566 11:26:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:01.566 11:26:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:01.566 11:26:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:01.825 11:26:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:01.825 11:26:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:01.825 11:26:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:01.825 11:26:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:01.825 11:26:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:01.825 11:26:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:01.825 11:26:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:02.095 11:26:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:02.095 11:26:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:02.095 11:26:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:02.095 11:26:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:02.095 11:26:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:02.095 11:26:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:02.095 11:26:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:02.095 11:26:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:17:02.095 11:26:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:02.095 11:26:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:02.355 11:26:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:02.355 11:26:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:02.355 11:26:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:02.355 11:26:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:02.355 11:26:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:02.355 11:26:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:02.355 11:26:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:02.355 11:26:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:02.355 11:26:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:02.355 11:26:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81760 00:17:02.355 11:26:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81760 ']' 00:17:02.355 11:26:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81760 00:17:02.355 11:26:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:02.355 11:26:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:02.355 11:26:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81760 00:17:02.355 11:26:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:02.355 11:26:45 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:02.355 killing process with pid 81760 00:17:02.355 11:26:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81760' 00:17:02.355 11:26:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81760 00:17:02.355 Received shutdown signal, test time was about 60.000000 seconds 00:17:02.355 00:17:02.355 Latency(us) 00:17:02.355 [2024-11-20T11:26:45.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.355 [2024-11-20T11:26:45.471Z] =================================================================================================================== 00:17:02.355 [2024-11-20T11:26:45.471Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:02.355 [2024-11-20 11:26:45.337112] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:02.355 11:26:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81760 00:17:02.940 [2024-11-20 11:26:45.822123] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:04.318 00:17:04.318 real 0m16.211s 00:17:04.318 user 0m19.962s 00:17:04.318 sys 0m2.184s 00:17:04.318 ************************************ 00:17:04.318 END TEST raid5f_rebuild_test 00:17:04.318 ************************************ 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.318 11:26:47 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:17:04.318 11:26:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:04.318 11:26:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:04.318 11:26:47 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:17:04.318 ************************************ 00:17:04.318 START TEST raid5f_rebuild_test_sb 00:17:04.318 ************************************ 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82212 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82212 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82212 ']' 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.318 11:26:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.318 [2024-11-20 11:26:47.324080] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:17:04.318 [2024-11-20 11:26:47.324299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:04.318 Zero copy mechanism will not be used. 
00:17:04.318 -allocations --file-prefix=spdk_pid82212 ] 00:17:04.576 [2024-11-20 11:26:47.505623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.576 [2024-11-20 11:26:47.638000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.836 [2024-11-20 11:26:47.840721] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.836 [2024-11-20 11:26:47.840872] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.094 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.094 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:05.094 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:05.094 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:05.094 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.094 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.372 BaseBdev1_malloc 00:17:05.372 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.372 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:05.372 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.372 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.372 [2024-11-20 11:26:48.226379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:05.372 [2024-11-20 11:26:48.226499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.372 [2024-11-20 11:26:48.226532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000007280 00:17:05.372 [2024-11-20 11:26:48.226544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.372 [2024-11-20 11:26:48.228865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.372 [2024-11-20 11:26:48.228908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:05.372 BaseBdev1 00:17:05.372 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.372 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:05.372 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:05.372 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.372 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.372 BaseBdev2_malloc 00:17:05.372 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.372 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:05.372 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.372 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.372 [2024-11-20 11:26:48.284781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:05.372 [2024-11-20 11:26:48.284931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.372 [2024-11-20 11:26:48.284963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:05.372 [2024-11-20 11:26:48.284980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.372 [2024-11-20 
11:26:48.287614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.372 [2024-11-20 11:26:48.287655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:05.372 BaseBdev2 00:17:05.372 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.373 BaseBdev3_malloc 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.373 [2024-11-20 11:26:48.353630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:05.373 [2024-11-20 11:26:48.353686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.373 [2024-11-20 11:26:48.353710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:05.373 [2024-11-20 11:26:48.353721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.373 [2024-11-20 11:26:48.355912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.373 [2024-11-20 11:26:48.355959] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:05.373 BaseBdev3 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.373 spare_malloc 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.373 spare_delay 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.373 [2024-11-20 11:26:48.420085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:05.373 [2024-11-20 11:26:48.420149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.373 [2024-11-20 11:26:48.420170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:05.373 [2024-11-20 11:26:48.420183] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:17:05.373 [2024-11-20 11:26:48.422407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.373 [2024-11-20 11:26:48.422466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:05.373 spare 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.373 [2024-11-20 11:26:48.432157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.373 [2024-11-20 11:26:48.433961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:05.373 [2024-11-20 11:26:48.434026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:05.373 [2024-11-20 11:26:48.434216] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:05.373 [2024-11-20 11:26:48.434231] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:05.373 [2024-11-20 11:26:48.434520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:05.373 [2024-11-20 11:26:48.440093] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:05.373 [2024-11-20 11:26:48.440117] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:05.373 [2024-11-20 11:26:48.440331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.373 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.648 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.648 "name": "raid_bdev1", 00:17:05.648 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:05.648 "strip_size_kb": 64, 00:17:05.648 "state": "online", 
00:17:05.648 "raid_level": "raid5f", 00:17:05.648 "superblock": true, 00:17:05.648 "num_base_bdevs": 3, 00:17:05.648 "num_base_bdevs_discovered": 3, 00:17:05.648 "num_base_bdevs_operational": 3, 00:17:05.648 "base_bdevs_list": [ 00:17:05.648 { 00:17:05.648 "name": "BaseBdev1", 00:17:05.648 "uuid": "cf256127-a15e-5325-97b3-bd5a9085f257", 00:17:05.648 "is_configured": true, 00:17:05.648 "data_offset": 2048, 00:17:05.648 "data_size": 63488 00:17:05.648 }, 00:17:05.648 { 00:17:05.648 "name": "BaseBdev2", 00:17:05.648 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:05.648 "is_configured": true, 00:17:05.648 "data_offset": 2048, 00:17:05.648 "data_size": 63488 00:17:05.648 }, 00:17:05.648 { 00:17:05.648 "name": "BaseBdev3", 00:17:05.648 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:05.648 "is_configured": true, 00:17:05.648 "data_offset": 2048, 00:17:05.648 "data_size": 63488 00:17:05.648 } 00:17:05.648 ] 00:17:05.648 }' 00:17:05.648 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.648 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.908 [2024-11-20 11:26:48.922102] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:05.908 11:26:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:06.167 [2024-11-20 11:26:49.201456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:06.167 /dev/nbd0 00:17:06.167 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:06.167 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:06.167 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:06.167 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:06.167 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:06.167 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:06.167 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:06.167 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:06.167 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:06.167 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:06.167 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:06.167 1+0 records in 00:17:06.167 1+0 records out 00:17:06.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434009 s, 9.4 MB/s 00:17:06.167 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.167 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:06.167 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
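The waitfornbd check traced above polls /proc/partitions for the new device and then proves it is readable with a single direct-I/O dd. A minimal sketch of that pattern, under the assumption that the device is nbd0 and that up to 20 polls suffice (as in the trace):

```shell
# Sketch of the waitfornbd pattern from the trace above (device name and
# retry count are assumptions taken from the logged loop bounds).
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # the kernel lists the device in /proc/partitions once it is live
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # one 4 KiB direct read proves the device actually serves data
    dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
}
```

The direct-I/O read matters: a device can appear in /proc/partitions before it accepts I/O, so the test only proceeds once a real read completes.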
00:17:06.426 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:06.426 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:06.426 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:06.426 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:06.426 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:06.426 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:06.426 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:06.426 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:17:06.686 496+0 records in 00:17:06.686 496+0 records out 00:17:06.686 65011712 bytes (65 MB, 62 MiB) copied, 0.483621 s, 134 MB/s 00:17:06.686 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:06.686 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:06.686 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:06.686 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:06.686 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:06.686 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:06.686 11:26:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:06.945 [2024-11-20 11:26:50.014748] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.945 [2024-11-20 11:26:50.031482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.945 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.946 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.946 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.946 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.946 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.946 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.205 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.205 "name": "raid_bdev1", 00:17:07.205 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:07.205 "strip_size_kb": 64, 00:17:07.205 "state": "online", 00:17:07.205 "raid_level": "raid5f", 00:17:07.205 "superblock": true, 00:17:07.205 "num_base_bdevs": 3, 00:17:07.205 "num_base_bdevs_discovered": 2, 00:17:07.205 "num_base_bdevs_operational": 2, 00:17:07.205 "base_bdevs_list": [ 00:17:07.205 { 00:17:07.205 "name": null, 00:17:07.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.205 "is_configured": false, 00:17:07.205 "data_offset": 0, 00:17:07.205 "data_size": 63488 00:17:07.205 }, 00:17:07.205 { 00:17:07.205 "name": "BaseBdev2", 00:17:07.205 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:07.205 "is_configured": true, 00:17:07.205 "data_offset": 2048, 00:17:07.205 "data_size": 63488 00:17:07.205 }, 00:17:07.205 { 00:17:07.205 "name": 
"BaseBdev3", 00:17:07.205 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:07.205 "is_configured": true, 00:17:07.205 "data_offset": 2048, 00:17:07.205 "data_size": 63488 00:17:07.205 } 00:17:07.205 ] 00:17:07.205 }' 00:17:07.205 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.206 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.465 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:07.465 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.465 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.465 [2024-11-20 11:26:50.482751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:07.465 [2024-11-20 11:26:50.504508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:17:07.465 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.465 11:26:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:07.465 [2024-11-20 11:26:50.515028] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:08.403 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.403 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.403 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.403 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.403 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.403 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.403 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.403 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.403 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.662 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.662 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.662 "name": "raid_bdev1", 00:17:08.662 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:08.662 "strip_size_kb": 64, 00:17:08.662 "state": "online", 00:17:08.662 "raid_level": "raid5f", 00:17:08.662 "superblock": true, 00:17:08.662 "num_base_bdevs": 3, 00:17:08.662 "num_base_bdevs_discovered": 3, 00:17:08.662 "num_base_bdevs_operational": 3, 00:17:08.662 "process": { 00:17:08.662 "type": "rebuild", 00:17:08.662 "target": "spare", 00:17:08.662 "progress": { 00:17:08.662 "blocks": 18432, 00:17:08.662 "percent": 14 00:17:08.662 } 00:17:08.662 }, 00:17:08.662 "base_bdevs_list": [ 00:17:08.662 { 00:17:08.662 "name": "spare", 00:17:08.662 "uuid": "2aba24b4-edcf-5288-8db4-26c1b67f8cb5", 00:17:08.662 "is_configured": true, 00:17:08.662 "data_offset": 2048, 00:17:08.662 "data_size": 63488 00:17:08.662 }, 00:17:08.662 { 00:17:08.662 "name": "BaseBdev2", 00:17:08.662 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:08.662 "is_configured": true, 00:17:08.662 "data_offset": 2048, 00:17:08.662 "data_size": 63488 00:17:08.662 }, 00:17:08.662 { 00:17:08.662 "name": "BaseBdev3", 00:17:08.662 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:08.662 "is_configured": true, 00:17:08.662 "data_offset": 2048, 00:17:08.662 "data_size": 63488 00:17:08.662 } 00:17:08.662 ] 00:17:08.662 }' 00:17:08.662 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:17:08.662 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.662 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.662 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.662 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:08.662 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.662 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.662 [2024-11-20 11:26:51.675188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.662 [2024-11-20 11:26:51.727422] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:08.662 [2024-11-20 11:26:51.727634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.662 [2024-11-20 11:26:51.727696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.662 [2024-11-20 11:26:51.727744] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:08.662 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.921 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:08.921 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.921 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.922 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.922 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.922 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:08.922 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.922 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.922 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.922 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.922 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.922 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.922 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.922 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.922 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.922 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.922 "name": "raid_bdev1", 00:17:08.922 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:08.922 "strip_size_kb": 64, 00:17:08.922 "state": "online", 00:17:08.922 "raid_level": "raid5f", 00:17:08.922 "superblock": true, 00:17:08.922 "num_base_bdevs": 3, 00:17:08.922 "num_base_bdevs_discovered": 2, 00:17:08.922 "num_base_bdevs_operational": 2, 00:17:08.922 "base_bdevs_list": [ 00:17:08.922 { 00:17:08.922 "name": null, 00:17:08.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.922 "is_configured": false, 00:17:08.922 "data_offset": 0, 00:17:08.922 "data_size": 63488 00:17:08.922 }, 00:17:08.922 { 00:17:08.922 "name": "BaseBdev2", 00:17:08.922 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:08.922 
"is_configured": true, 00:17:08.922 "data_offset": 2048, 00:17:08.922 "data_size": 63488 00:17:08.922 }, 00:17:08.922 { 00:17:08.922 "name": "BaseBdev3", 00:17:08.922 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:08.922 "is_configured": true, 00:17:08.922 "data_offset": 2048, 00:17:08.922 "data_size": 63488 00:17:08.922 } 00:17:08.922 ] 00:17:08.922 }' 00:17:08.922 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.922 11:26:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.182 11:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:09.182 11:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.182 11:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:09.182 11:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:09.182 11:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.182 11:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.182 11:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.182 11:26:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.182 11:26:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.182 11:26:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.182 11:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.182 "name": "raid_bdev1", 00:17:09.182 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:09.182 "strip_size_kb": 64, 00:17:09.182 "state": "online", 00:17:09.182 "raid_level": "raid5f", 
00:17:09.182 "superblock": true, 00:17:09.182 "num_base_bdevs": 3, 00:17:09.182 "num_base_bdevs_discovered": 2, 00:17:09.182 "num_base_bdevs_operational": 2, 00:17:09.182 "base_bdevs_list": [ 00:17:09.182 { 00:17:09.182 "name": null, 00:17:09.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.182 "is_configured": false, 00:17:09.182 "data_offset": 0, 00:17:09.182 "data_size": 63488 00:17:09.182 }, 00:17:09.182 { 00:17:09.182 "name": "BaseBdev2", 00:17:09.182 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:09.182 "is_configured": true, 00:17:09.182 "data_offset": 2048, 00:17:09.182 "data_size": 63488 00:17:09.182 }, 00:17:09.182 { 00:17:09.182 "name": "BaseBdev3", 00:17:09.182 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:09.182 "is_configured": true, 00:17:09.182 "data_offset": 2048, 00:17:09.182 "data_size": 63488 00:17:09.182 } 00:17:09.182 ] 00:17:09.182 }' 00:17:09.182 11:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.442 11:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:09.442 11:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.442 11:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:09.442 11:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:09.442 11:26:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.442 11:26:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.442 [2024-11-20 11:26:52.385671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:09.442 [2024-11-20 11:26:52.406035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:17:09.442 11:26:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.442 11:26:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:09.442 [2024-11-20 11:26:52.415864] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:10.380 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.380 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.380 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.380 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.380 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.380 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.380 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.380 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.380 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.380 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.380 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.380 "name": "raid_bdev1", 00:17:10.380 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:10.380 "strip_size_kb": 64, 00:17:10.380 "state": "online", 00:17:10.380 "raid_level": "raid5f", 00:17:10.380 "superblock": true, 00:17:10.380 "num_base_bdevs": 3, 00:17:10.380 "num_base_bdevs_discovered": 3, 00:17:10.380 "num_base_bdevs_operational": 3, 00:17:10.380 "process": { 00:17:10.380 "type": "rebuild", 00:17:10.380 "target": "spare", 00:17:10.380 "progress": { 
00:17:10.380 "blocks": 18432, 00:17:10.380 "percent": 14 00:17:10.380 } 00:17:10.380 }, 00:17:10.380 "base_bdevs_list": [ 00:17:10.380 { 00:17:10.380 "name": "spare", 00:17:10.380 "uuid": "2aba24b4-edcf-5288-8db4-26c1b67f8cb5", 00:17:10.380 "is_configured": true, 00:17:10.380 "data_offset": 2048, 00:17:10.380 "data_size": 63488 00:17:10.380 }, 00:17:10.380 { 00:17:10.380 "name": "BaseBdev2", 00:17:10.380 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:10.380 "is_configured": true, 00:17:10.380 "data_offset": 2048, 00:17:10.380 "data_size": 63488 00:17:10.380 }, 00:17:10.380 { 00:17:10.380 "name": "BaseBdev3", 00:17:10.380 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:10.380 "is_configured": true, 00:17:10.380 "data_offset": 2048, 00:17:10.380 "data_size": 63488 00:17:10.380 } 00:17:10.380 ] 00:17:10.380 }' 00:17:10.380 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.380 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:10.641 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=579 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- 
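The `bdev_raid.sh: line 666: [: =: unary operator expected` message in the trace above is the classic single-bracket pitfall: an unquoted variable that expands to nothing leaves `[` with too few operands. A minimal reproduction, with `flag` as a placeholder name (the real variable in bdev_raid.sh is not shown in the log):

```shell
# Reproduces the "[: =: unary operator expected" failure: with $flag empty
# and unquoted, the test collapses to "[ = false ]" and errors out.
flag=""

broken_status=0
[ $flag = false ] 2>/dev/null || broken_status=$?   # errors, status != 0

# Quoting the expansion (or using bash's [[ ]]) keeps the operand in place,
# so the comparison evaluates normally and simply returns false (status 1).
quoted_status=0
[ "$flag" = false ] || quoted_status=$?

dbl_status=0
[[ $flag = false ]] || dbl_status=$?
```

Note the test still "passes" in the log because the error path falls through; `[[ ]]` or quoting would make the comparison well-defined either way.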
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.641 "name": "raid_bdev1", 00:17:10.641 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:10.641 "strip_size_kb": 64, 00:17:10.641 "state": "online", 00:17:10.641 "raid_level": "raid5f", 00:17:10.641 "superblock": true, 00:17:10.641 "num_base_bdevs": 3, 00:17:10.641 "num_base_bdevs_discovered": 3, 00:17:10.641 "num_base_bdevs_operational": 3, 00:17:10.641 "process": { 00:17:10.641 "type": "rebuild", 00:17:10.641 "target": "spare", 00:17:10.641 "progress": { 00:17:10.641 "blocks": 22528, 00:17:10.641 "percent": 17 00:17:10.641 } 00:17:10.641 }, 00:17:10.641 "base_bdevs_list": [ 00:17:10.641 { 00:17:10.641 "name": "spare", 00:17:10.641 "uuid": "2aba24b4-edcf-5288-8db4-26c1b67f8cb5", 
00:17:10.641 "is_configured": true, 00:17:10.641 "data_offset": 2048, 00:17:10.641 "data_size": 63488 00:17:10.641 }, 00:17:10.641 { 00:17:10.641 "name": "BaseBdev2", 00:17:10.641 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:10.641 "is_configured": true, 00:17:10.641 "data_offset": 2048, 00:17:10.641 "data_size": 63488 00:17:10.641 }, 00:17:10.641 { 00:17:10.641 "name": "BaseBdev3", 00:17:10.641 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:10.641 "is_configured": true, 00:17:10.641 "data_offset": 2048, 00:17:10.641 "data_size": 63488 00:17:10.641 } 00:17:10.641 ] 00:17:10.641 }' 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.641 11:26:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:11.580 11:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.580 11:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.580 11:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.580 11:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.580 11:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.580 11:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.580 11:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.580 11:26:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.580 11:26:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.580 11:26:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.865 11:26:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.865 11:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.865 "name": "raid_bdev1", 00:17:11.865 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:11.865 "strip_size_kb": 64, 00:17:11.865 "state": "online", 00:17:11.865 "raid_level": "raid5f", 00:17:11.865 "superblock": true, 00:17:11.865 "num_base_bdevs": 3, 00:17:11.865 "num_base_bdevs_discovered": 3, 00:17:11.865 "num_base_bdevs_operational": 3, 00:17:11.865 "process": { 00:17:11.865 "type": "rebuild", 00:17:11.865 "target": "spare", 00:17:11.865 "progress": { 00:17:11.865 "blocks": 45056, 00:17:11.865 "percent": 35 00:17:11.865 } 00:17:11.865 }, 00:17:11.865 "base_bdevs_list": [ 00:17:11.865 { 00:17:11.865 "name": "spare", 00:17:11.865 "uuid": "2aba24b4-edcf-5288-8db4-26c1b67f8cb5", 00:17:11.865 "is_configured": true, 00:17:11.865 "data_offset": 2048, 00:17:11.865 "data_size": 63488 00:17:11.865 }, 00:17:11.865 { 00:17:11.865 "name": "BaseBdev2", 00:17:11.865 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:11.865 "is_configured": true, 00:17:11.865 "data_offset": 2048, 00:17:11.865 "data_size": 63488 00:17:11.865 }, 00:17:11.865 { 00:17:11.865 "name": "BaseBdev3", 00:17:11.865 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:11.865 "is_configured": true, 00:17:11.865 "data_offset": 2048, 00:17:11.865 "data_size": 63488 00:17:11.865 } 00:17:11.865 ] 00:17:11.865 }' 00:17:11.865 11:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.865 11:26:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.865 11:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.865 11:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.865 11:26:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:12.801 11:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.801 11:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.801 11:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.801 11:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.801 11:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.801 11:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.801 11:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.801 11:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.801 11:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.801 11:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.801 11:26:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.801 11:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.801 "name": "raid_bdev1", 00:17:12.801 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:12.801 "strip_size_kb": 64, 00:17:12.801 "state": "online", 00:17:12.801 "raid_level": "raid5f", 00:17:12.801 "superblock": true, 00:17:12.801 "num_base_bdevs": 3, 
00:17:12.801 "num_base_bdevs_discovered": 3, 00:17:12.801 "num_base_bdevs_operational": 3, 00:17:12.801 "process": { 00:17:12.801 "type": "rebuild", 00:17:12.801 "target": "spare", 00:17:12.801 "progress": { 00:17:12.801 "blocks": 67584, 00:17:12.801 "percent": 53 00:17:12.801 } 00:17:12.801 }, 00:17:12.801 "base_bdevs_list": [ 00:17:12.801 { 00:17:12.801 "name": "spare", 00:17:12.801 "uuid": "2aba24b4-edcf-5288-8db4-26c1b67f8cb5", 00:17:12.802 "is_configured": true, 00:17:12.802 "data_offset": 2048, 00:17:12.802 "data_size": 63488 00:17:12.802 }, 00:17:12.802 { 00:17:12.802 "name": "BaseBdev2", 00:17:12.802 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:12.802 "is_configured": true, 00:17:12.802 "data_offset": 2048, 00:17:12.802 "data_size": 63488 00:17:12.802 }, 00:17:12.802 { 00:17:12.802 "name": "BaseBdev3", 00:17:12.802 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:12.802 "is_configured": true, 00:17:12.802 "data_offset": 2048, 00:17:12.802 "data_size": 63488 00:17:12.802 } 00:17:12.802 ] 00:17:12.802 }' 00:17:12.802 11:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.060 11:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.060 11:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.060 11:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.060 11:26:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:14.018 11:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:14.018 11:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.018 11:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.018 11:26:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.018 11:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.018 11:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.018 11:26:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.018 11:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.018 11:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.018 11:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.018 11:26:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.018 11:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.018 "name": "raid_bdev1", 00:17:14.018 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:14.018 "strip_size_kb": 64, 00:17:14.018 "state": "online", 00:17:14.018 "raid_level": "raid5f", 00:17:14.018 "superblock": true, 00:17:14.018 "num_base_bdevs": 3, 00:17:14.018 "num_base_bdevs_discovered": 3, 00:17:14.018 "num_base_bdevs_operational": 3, 00:17:14.018 "process": { 00:17:14.018 "type": "rebuild", 00:17:14.018 "target": "spare", 00:17:14.018 "progress": { 00:17:14.018 "blocks": 92160, 00:17:14.018 "percent": 72 00:17:14.018 } 00:17:14.018 }, 00:17:14.018 "base_bdevs_list": [ 00:17:14.018 { 00:17:14.018 "name": "spare", 00:17:14.018 "uuid": "2aba24b4-edcf-5288-8db4-26c1b67f8cb5", 00:17:14.018 "is_configured": true, 00:17:14.018 "data_offset": 2048, 00:17:14.018 "data_size": 63488 00:17:14.018 }, 00:17:14.018 { 00:17:14.018 "name": "BaseBdev2", 00:17:14.018 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:14.018 "is_configured": true, 00:17:14.018 "data_offset": 2048, 00:17:14.018 "data_size": 63488 
00:17:14.018 }, 00:17:14.018 { 00:17:14.018 "name": "BaseBdev3", 00:17:14.018 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:14.018 "is_configured": true, 00:17:14.018 "data_offset": 2048, 00:17:14.018 "data_size": 63488 00:17:14.018 } 00:17:14.018 ] 00:17:14.018 }' 00:17:14.018 11:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.018 11:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.018 11:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.277 11:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.277 11:26:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:15.213 11:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:15.213 11:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.213 11:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.213 11:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.213 11:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.213 11:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.213 11:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.213 11:26:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.213 11:26:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.213 11:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.213 11:26:58 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.213 11:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.213 "name": "raid_bdev1", 00:17:15.213 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:15.213 "strip_size_kb": 64, 00:17:15.213 "state": "online", 00:17:15.213 "raid_level": "raid5f", 00:17:15.213 "superblock": true, 00:17:15.213 "num_base_bdevs": 3, 00:17:15.213 "num_base_bdevs_discovered": 3, 00:17:15.213 "num_base_bdevs_operational": 3, 00:17:15.213 "process": { 00:17:15.213 "type": "rebuild", 00:17:15.213 "target": "spare", 00:17:15.213 "progress": { 00:17:15.213 "blocks": 114688, 00:17:15.213 "percent": 90 00:17:15.213 } 00:17:15.213 }, 00:17:15.213 "base_bdevs_list": [ 00:17:15.213 { 00:17:15.213 "name": "spare", 00:17:15.213 "uuid": "2aba24b4-edcf-5288-8db4-26c1b67f8cb5", 00:17:15.213 "is_configured": true, 00:17:15.213 "data_offset": 2048, 00:17:15.213 "data_size": 63488 00:17:15.213 }, 00:17:15.213 { 00:17:15.213 "name": "BaseBdev2", 00:17:15.213 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:15.213 "is_configured": true, 00:17:15.213 "data_offset": 2048, 00:17:15.213 "data_size": 63488 00:17:15.213 }, 00:17:15.213 { 00:17:15.213 "name": "BaseBdev3", 00:17:15.213 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:15.213 "is_configured": true, 00:17:15.213 "data_offset": 2048, 00:17:15.213 "data_size": 63488 00:17:15.213 } 00:17:15.213 ] 00:17:15.213 }' 00:17:15.213 11:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.213 11:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.213 11:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.213 11:26:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.213 11:26:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:15.782 [2024-11-20 11:26:58.678032] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:15.782 [2024-11-20 11:26:58.678145] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:15.782 [2024-11-20 11:26:58.678295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.350 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:16.350 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.350 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.350 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.350 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.350 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.350 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.350 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.350 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.350 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.350 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.350 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.350 "name": "raid_bdev1", 00:17:16.350 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:16.350 "strip_size_kb": 64, 00:17:16.350 "state": "online", 00:17:16.350 "raid_level": "raid5f", 
00:17:16.350 "superblock": true, 00:17:16.350 "num_base_bdevs": 3, 00:17:16.350 "num_base_bdevs_discovered": 3, 00:17:16.350 "num_base_bdevs_operational": 3, 00:17:16.350 "base_bdevs_list": [ 00:17:16.350 { 00:17:16.350 "name": "spare", 00:17:16.350 "uuid": "2aba24b4-edcf-5288-8db4-26c1b67f8cb5", 00:17:16.350 "is_configured": true, 00:17:16.350 "data_offset": 2048, 00:17:16.350 "data_size": 63488 00:17:16.350 }, 00:17:16.350 { 00:17:16.350 "name": "BaseBdev2", 00:17:16.350 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:16.350 "is_configured": true, 00:17:16.350 "data_offset": 2048, 00:17:16.350 "data_size": 63488 00:17:16.350 }, 00:17:16.350 { 00:17:16.350 "name": "BaseBdev3", 00:17:16.350 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:16.350 "is_configured": true, 00:17:16.350 "data_offset": 2048, 00:17:16.350 "data_size": 63488 00:17:16.350 } 00:17:16.350 ] 00:17:16.350 }' 00:17:16.350 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.350 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:16.350 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.650 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:16.650 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:16.650 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:16.650 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.650 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:16.650 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:16.650 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:17:16.650 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.650 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.650 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.650 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.650 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.650 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.650 "name": "raid_bdev1", 00:17:16.650 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:16.650 "strip_size_kb": 64, 00:17:16.650 "state": "online", 00:17:16.650 "raid_level": "raid5f", 00:17:16.650 "superblock": true, 00:17:16.650 "num_base_bdevs": 3, 00:17:16.650 "num_base_bdevs_discovered": 3, 00:17:16.650 "num_base_bdevs_operational": 3, 00:17:16.650 "base_bdevs_list": [ 00:17:16.650 { 00:17:16.650 "name": "spare", 00:17:16.650 "uuid": "2aba24b4-edcf-5288-8db4-26c1b67f8cb5", 00:17:16.650 "is_configured": true, 00:17:16.650 "data_offset": 2048, 00:17:16.650 "data_size": 63488 00:17:16.650 }, 00:17:16.650 { 00:17:16.650 "name": "BaseBdev2", 00:17:16.650 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:16.650 "is_configured": true, 00:17:16.650 "data_offset": 2048, 00:17:16.650 "data_size": 63488 00:17:16.650 }, 00:17:16.650 { 00:17:16.650 "name": "BaseBdev3", 00:17:16.650 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:16.650 "is_configured": true, 00:17:16.650 "data_offset": 2048, 00:17:16.650 "data_size": 63488 00:17:16.650 } 00:17:16.650 ] 00:17:16.650 }' 00:17:16.650 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.650 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:17:16.650 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.650 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:16.650 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:16.650 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.650 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.651 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.651 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.651 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:16.651 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.651 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.651 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.651 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.651 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.651 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.651 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.651 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.651 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.651 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.651 "name": "raid_bdev1", 00:17:16.651 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:16.651 "strip_size_kb": 64, 00:17:16.651 "state": "online", 00:17:16.651 "raid_level": "raid5f", 00:17:16.651 "superblock": true, 00:17:16.651 "num_base_bdevs": 3, 00:17:16.651 "num_base_bdevs_discovered": 3, 00:17:16.651 "num_base_bdevs_operational": 3, 00:17:16.651 "base_bdevs_list": [ 00:17:16.651 { 00:17:16.651 "name": "spare", 00:17:16.651 "uuid": "2aba24b4-edcf-5288-8db4-26c1b67f8cb5", 00:17:16.651 "is_configured": true, 00:17:16.651 "data_offset": 2048, 00:17:16.651 "data_size": 63488 00:17:16.651 }, 00:17:16.651 { 00:17:16.651 "name": "BaseBdev2", 00:17:16.651 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:16.651 "is_configured": true, 00:17:16.651 "data_offset": 2048, 00:17:16.651 "data_size": 63488 00:17:16.651 }, 00:17:16.651 { 00:17:16.651 "name": "BaseBdev3", 00:17:16.651 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:16.651 "is_configured": true, 00:17:16.651 "data_offset": 2048, 00:17:16.651 "data_size": 63488 00:17:16.651 } 00:17:16.651 ] 00:17:16.651 }' 00:17:16.651 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.651 11:26:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.219 [2024-11-20 11:27:00.095873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:17.219 [2024-11-20 11:27:00.095966] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:17.219 [2024-11-20 11:27:00.096203] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.219 [2024-11-20 11:27:00.096375] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.219 [2024-11-20 11:27:00.096406] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:17.219 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:17.478 /dev/nbd0 00:17:17.478 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:17.478 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:17.478 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:17.478 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:17.478 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:17.478 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:17.478 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:17.478 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:17.478 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:17.478 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:17.478 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.478 1+0 records in 00:17:17.478 1+0 records out 00:17:17.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354742 s, 11.5 MB/s 00:17:17.478 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.478 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:17.478 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.478 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:17.478 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:17.478 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:17.479 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:17.479 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:17.737 /dev/nbd1 00:17:17.737 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:17.737 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:17.737 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:17.737 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:17.737 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:17.737 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:17.737 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:17.737 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:17.737 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:17.737 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:17.737 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.737 1+0 records in 00:17:17.737 1+0 records out 00:17:17.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000590549 s, 6.9 MB/s 00:17:17.737 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.737 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:17.737 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.737 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:17.737 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:17.737 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:17.737 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:17.737 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:17.997 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:17.997 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:17.997 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:17.997 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:17.997 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:17.997 11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:17.997 
11:27:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:18.256 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:18.256 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:18.256 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:18.256 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:18.256 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:18.257 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:18.257 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:18.257 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:18.257 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:18.257 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:18.516 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:18.516 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:18.516 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:18.516 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:18.516 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:18.516 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:18.516 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:18.516 
11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:18.516 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:18.516 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:18.516 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.516 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.516 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.516 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:18.516 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.516 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.516 [2024-11-20 11:27:01.538638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:18.516 [2024-11-20 11:27:01.538803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.516 [2024-11-20 11:27:01.538857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:18.516 [2024-11-20 11:27:01.538904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.516 [2024-11-20 11:27:01.541838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.516 [2024-11-20 11:27:01.541959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:18.516 [2024-11-20 11:27:01.542118] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:18.516 [2024-11-20 11:27:01.542235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:18.516 [2024-11-20 11:27:01.542472] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:18.516 spare 00:17:18.516 [2024-11-20 11:27:01.542649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:18.516 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.516 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:18.516 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.516 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.775 [2024-11-20 11:27:01.642628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:18.775 [2024-11-20 11:27:01.642783] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:18.775 [2024-11-20 11:27:01.643212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:17:18.775 [2024-11-20 11:27:01.650183] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:18.775 [2024-11-20 11:27:01.650209] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:18.775 [2024-11-20 11:27:01.650479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.775 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.775 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:18.775 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.775 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.775 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid5f 00:17:18.775 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.775 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:18.775 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.775 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.775 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.775 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.776 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.776 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.776 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.776 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.776 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.776 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.776 "name": "raid_bdev1", 00:17:18.776 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:18.776 "strip_size_kb": 64, 00:17:18.776 "state": "online", 00:17:18.776 "raid_level": "raid5f", 00:17:18.776 "superblock": true, 00:17:18.776 "num_base_bdevs": 3, 00:17:18.776 "num_base_bdevs_discovered": 3, 00:17:18.776 "num_base_bdevs_operational": 3, 00:17:18.776 "base_bdevs_list": [ 00:17:18.776 { 00:17:18.776 "name": "spare", 00:17:18.776 "uuid": "2aba24b4-edcf-5288-8db4-26c1b67f8cb5", 00:17:18.776 "is_configured": true, 00:17:18.776 "data_offset": 2048, 00:17:18.776 "data_size": 63488 00:17:18.776 }, 00:17:18.776 { 00:17:18.776 "name": "BaseBdev2", 
00:17:18.776 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:18.776 "is_configured": true, 00:17:18.776 "data_offset": 2048, 00:17:18.776 "data_size": 63488 00:17:18.776 }, 00:17:18.776 { 00:17:18.776 "name": "BaseBdev3", 00:17:18.776 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:18.776 "is_configured": true, 00:17:18.776 "data_offset": 2048, 00:17:18.776 "data_size": 63488 00:17:18.776 } 00:17:18.776 ] 00:17:18.776 }' 00:17:18.776 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.776 11:27:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.034 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:19.034 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.034 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:19.034 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:19.034 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.034 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.034 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.034 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.034 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.034 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.034 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.034 "name": "raid_bdev1", 00:17:19.034 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:19.034 "strip_size_kb": 64, 
00:17:19.034 "state": "online", 00:17:19.034 "raid_level": "raid5f", 00:17:19.034 "superblock": true, 00:17:19.034 "num_base_bdevs": 3, 00:17:19.034 "num_base_bdevs_discovered": 3, 00:17:19.034 "num_base_bdevs_operational": 3, 00:17:19.034 "base_bdevs_list": [ 00:17:19.034 { 00:17:19.034 "name": "spare", 00:17:19.034 "uuid": "2aba24b4-edcf-5288-8db4-26c1b67f8cb5", 00:17:19.034 "is_configured": true, 00:17:19.034 "data_offset": 2048, 00:17:19.034 "data_size": 63488 00:17:19.034 }, 00:17:19.034 { 00:17:19.034 "name": "BaseBdev2", 00:17:19.034 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:19.034 "is_configured": true, 00:17:19.034 "data_offset": 2048, 00:17:19.034 "data_size": 63488 00:17:19.034 }, 00:17:19.034 { 00:17:19.034 "name": "BaseBdev3", 00:17:19.034 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:19.034 "is_configured": true, 00:17:19.034 "data_offset": 2048, 00:17:19.034 "data_size": 63488 00:17:19.034 } 00:17:19.034 ] 00:17:19.034 }' 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.293 11:27:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.293 [2024-11-20 11:27:02.297523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.293 "name": "raid_bdev1", 00:17:19.293 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:19.293 "strip_size_kb": 64, 00:17:19.293 "state": "online", 00:17:19.293 "raid_level": "raid5f", 00:17:19.293 "superblock": true, 00:17:19.293 "num_base_bdevs": 3, 00:17:19.293 "num_base_bdevs_discovered": 2, 00:17:19.293 "num_base_bdevs_operational": 2, 00:17:19.293 "base_bdevs_list": [ 00:17:19.293 { 00:17:19.293 "name": null, 00:17:19.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.293 "is_configured": false, 00:17:19.293 "data_offset": 0, 00:17:19.293 "data_size": 63488 00:17:19.293 }, 00:17:19.293 { 00:17:19.293 "name": "BaseBdev2", 00:17:19.293 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:19.293 "is_configured": true, 00:17:19.293 "data_offset": 2048, 00:17:19.293 "data_size": 63488 00:17:19.293 }, 00:17:19.293 { 00:17:19.293 "name": "BaseBdev3", 00:17:19.293 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:19.293 "is_configured": true, 00:17:19.293 "data_offset": 2048, 00:17:19.293 "data_size": 63488 00:17:19.293 } 00:17:19.293 ] 00:17:19.293 }' 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.293 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.864 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:19.864 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.864 11:27:02 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.864 [2024-11-20 11:27:02.812674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:19.864 [2024-11-20 11:27:02.812989] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:19.864 [2024-11-20 11:27:02.813073] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:19.864 [2024-11-20 11:27:02.813147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:19.864 [2024-11-20 11:27:02.833548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:17:19.864 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.864 11:27:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:19.864 [2024-11-20 11:27:02.843406] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:20.801 11:27:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.801 11:27:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.801 11:27:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.801 11:27:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.801 11:27:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.801 11:27:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.801 11:27:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.801 11:27:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.801 11:27:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.801 11:27:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.801 11:27:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.801 "name": "raid_bdev1", 00:17:20.801 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:20.801 "strip_size_kb": 64, 00:17:20.801 "state": "online", 00:17:20.801 "raid_level": "raid5f", 00:17:20.801 "superblock": true, 00:17:20.801 "num_base_bdevs": 3, 00:17:20.801 "num_base_bdevs_discovered": 3, 00:17:20.801 "num_base_bdevs_operational": 3, 00:17:20.801 "process": { 00:17:20.801 "type": "rebuild", 00:17:20.801 "target": "spare", 00:17:20.801 "progress": { 00:17:20.801 "blocks": 18432, 00:17:20.801 "percent": 14 00:17:20.801 } 00:17:20.801 }, 00:17:20.801 "base_bdevs_list": [ 00:17:20.801 { 00:17:20.801 "name": "spare", 00:17:20.801 "uuid": "2aba24b4-edcf-5288-8db4-26c1b67f8cb5", 00:17:20.801 "is_configured": true, 00:17:20.801 "data_offset": 2048, 00:17:20.801 "data_size": 63488 00:17:20.801 }, 00:17:20.801 { 00:17:20.801 "name": "BaseBdev2", 00:17:20.801 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:20.801 "is_configured": true, 00:17:20.801 "data_offset": 2048, 00:17:20.801 "data_size": 63488 00:17:20.801 }, 00:17:20.801 { 00:17:20.801 "name": "BaseBdev3", 00:17:20.801 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:20.801 "is_configured": true, 00:17:20.801 "data_offset": 2048, 00:17:20.801 "data_size": 63488 00:17:20.801 } 00:17:20.801 ] 00:17:20.801 }' 00:17:20.801 11:27:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.061 11:27:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.061 11:27:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:21.061 11:27:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.061 11:27:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:21.061 11:27:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.061 11:27:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.061 [2024-11-20 11:27:04.004270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:21.061 [2024-11-20 11:27:04.056362] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:21.061 [2024-11-20 11:27:04.056508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.061 [2024-11-20 11:27:04.056533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:21.061 [2024-11-20 11:27:04.056546] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:21.061 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.061 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:21.061 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.061 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.061 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:21.061 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.061 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:21.061 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:21.061 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.061 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.061 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.061 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.061 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.061 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.061 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.061 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.061 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.061 "name": "raid_bdev1", 00:17:21.061 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:21.061 "strip_size_kb": 64, 00:17:21.061 "state": "online", 00:17:21.061 "raid_level": "raid5f", 00:17:21.061 "superblock": true, 00:17:21.061 "num_base_bdevs": 3, 00:17:21.061 "num_base_bdevs_discovered": 2, 00:17:21.061 "num_base_bdevs_operational": 2, 00:17:21.061 "base_bdevs_list": [ 00:17:21.061 { 00:17:21.061 "name": null, 00:17:21.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.061 "is_configured": false, 00:17:21.061 "data_offset": 0, 00:17:21.061 "data_size": 63488 00:17:21.061 }, 00:17:21.061 { 00:17:21.061 "name": "BaseBdev2", 00:17:21.061 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:21.061 "is_configured": true, 00:17:21.061 "data_offset": 2048, 00:17:21.061 "data_size": 63488 00:17:21.061 }, 00:17:21.061 { 00:17:21.061 "name": "BaseBdev3", 00:17:21.061 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:21.061 "is_configured": true, 00:17:21.061 
"data_offset": 2048, 00:17:21.061 "data_size": 63488 00:17:21.061 } 00:17:21.061 ] 00:17:21.061 }' 00:17:21.061 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.061 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.629 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:21.629 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.629 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.629 [2024-11-20 11:27:04.605950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:21.629 [2024-11-20 11:27:04.606106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.629 [2024-11-20 11:27:04.606161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:17:21.629 [2024-11-20 11:27:04.606209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.630 [2024-11-20 11:27:04.606837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.630 [2024-11-20 11:27:04.606867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:21.630 [2024-11-20 11:27:04.606981] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:21.630 [2024-11-20 11:27:04.607002] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:21.630 [2024-11-20 11:27:04.607014] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:21.630 [2024-11-20 11:27:04.607041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:21.630 [2024-11-20 11:27:04.626356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:17:21.630 spare 00:17:21.630 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.630 11:27:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:21.630 [2024-11-20 11:27:04.635584] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:22.566 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.566 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.566 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.566 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.566 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.566 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.566 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.566 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.566 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.566 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.826 "name": "raid_bdev1", 00:17:22.826 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:22.826 "strip_size_kb": 64, 00:17:22.826 "state": 
"online", 00:17:22.826 "raid_level": "raid5f", 00:17:22.826 "superblock": true, 00:17:22.826 "num_base_bdevs": 3, 00:17:22.826 "num_base_bdevs_discovered": 3, 00:17:22.826 "num_base_bdevs_operational": 3, 00:17:22.826 "process": { 00:17:22.826 "type": "rebuild", 00:17:22.826 "target": "spare", 00:17:22.826 "progress": { 00:17:22.826 "blocks": 18432, 00:17:22.826 "percent": 14 00:17:22.826 } 00:17:22.826 }, 00:17:22.826 "base_bdevs_list": [ 00:17:22.826 { 00:17:22.826 "name": "spare", 00:17:22.826 "uuid": "2aba24b4-edcf-5288-8db4-26c1b67f8cb5", 00:17:22.826 "is_configured": true, 00:17:22.826 "data_offset": 2048, 00:17:22.826 "data_size": 63488 00:17:22.826 }, 00:17:22.826 { 00:17:22.826 "name": "BaseBdev2", 00:17:22.826 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:22.826 "is_configured": true, 00:17:22.826 "data_offset": 2048, 00:17:22.826 "data_size": 63488 00:17:22.826 }, 00:17:22.826 { 00:17:22.826 "name": "BaseBdev3", 00:17:22.826 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:22.826 "is_configured": true, 00:17:22.826 "data_offset": 2048, 00:17:22.826 "data_size": 63488 00:17:22.826 } 00:17:22.826 ] 00:17:22.826 }' 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.826 [2024-11-20 11:27:05.788138] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:22.826 [2024-11-20 11:27:05.848299] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:22.826 [2024-11-20 11:27:05.848519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.826 [2024-11-20 11:27:05.848550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:22.826 [2024-11-20 11:27:05.848561] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.826 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.086 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.086 "name": "raid_bdev1", 00:17:23.086 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:23.086 "strip_size_kb": 64, 00:17:23.086 "state": "online", 00:17:23.086 "raid_level": "raid5f", 00:17:23.086 "superblock": true, 00:17:23.086 "num_base_bdevs": 3, 00:17:23.086 "num_base_bdevs_discovered": 2, 00:17:23.086 "num_base_bdevs_operational": 2, 00:17:23.086 "base_bdevs_list": [ 00:17:23.086 { 00:17:23.086 "name": null, 00:17:23.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.086 "is_configured": false, 00:17:23.086 "data_offset": 0, 00:17:23.086 "data_size": 63488 00:17:23.086 }, 00:17:23.086 { 00:17:23.086 "name": "BaseBdev2", 00:17:23.086 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:23.086 "is_configured": true, 00:17:23.086 "data_offset": 2048, 00:17:23.086 "data_size": 63488 00:17:23.086 }, 00:17:23.086 { 00:17:23.086 "name": "BaseBdev3", 00:17:23.086 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:23.086 "is_configured": true, 00:17:23.086 "data_offset": 2048, 00:17:23.086 "data_size": 63488 00:17:23.086 } 00:17:23.086 ] 00:17:23.086 }' 00:17:23.086 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.086 11:27:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.346 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:23.346 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:17:23.346 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:23.346 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:23.346 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.346 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.346 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.346 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.346 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.346 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.346 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.346 "name": "raid_bdev1", 00:17:23.346 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:23.346 "strip_size_kb": 64, 00:17:23.346 "state": "online", 00:17:23.346 "raid_level": "raid5f", 00:17:23.346 "superblock": true, 00:17:23.346 "num_base_bdevs": 3, 00:17:23.346 "num_base_bdevs_discovered": 2, 00:17:23.346 "num_base_bdevs_operational": 2, 00:17:23.346 "base_bdevs_list": [ 00:17:23.346 { 00:17:23.346 "name": null, 00:17:23.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.346 "is_configured": false, 00:17:23.346 "data_offset": 0, 00:17:23.346 "data_size": 63488 00:17:23.346 }, 00:17:23.346 { 00:17:23.346 "name": "BaseBdev2", 00:17:23.346 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:23.346 "is_configured": true, 00:17:23.346 "data_offset": 2048, 00:17:23.346 "data_size": 63488 00:17:23.346 }, 00:17:23.346 { 00:17:23.346 "name": "BaseBdev3", 00:17:23.346 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:23.346 "is_configured": true, 
00:17:23.346 "data_offset": 2048, 00:17:23.346 "data_size": 63488 00:17:23.346 } 00:17:23.346 ] 00:17:23.346 }' 00:17:23.346 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.606 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:23.606 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.606 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:23.606 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:23.606 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.606 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.606 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.606 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:23.606 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.606 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.606 [2024-11-20 11:27:06.551846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:23.606 [2024-11-20 11:27:06.551973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.606 [2024-11-20 11:27:06.552042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:23.606 [2024-11-20 11:27:06.552091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.606 [2024-11-20 11:27:06.552688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.606 [2024-11-20 
11:27:06.552715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:23.606 [2024-11-20 11:27:06.552826] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:23.606 [2024-11-20 11:27:06.552847] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:23.606 [2024-11-20 11:27:06.552873] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:23.606 [2024-11-20 11:27:06.552886] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:23.606 BaseBdev1 00:17:23.606 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.606 11:27:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:24.541 11:27:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:24.541 11:27:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.541 11:27:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.541 11:27:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.541 11:27:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.541 11:27:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.541 11:27:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.541 11:27:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.541 11:27:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.541 11:27:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.541 11:27:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.541 11:27:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.541 11:27:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.541 11:27:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.541 11:27:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.541 11:27:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.541 "name": "raid_bdev1", 00:17:24.541 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:24.541 "strip_size_kb": 64, 00:17:24.541 "state": "online", 00:17:24.541 "raid_level": "raid5f", 00:17:24.541 "superblock": true, 00:17:24.541 "num_base_bdevs": 3, 00:17:24.541 "num_base_bdevs_discovered": 2, 00:17:24.541 "num_base_bdevs_operational": 2, 00:17:24.541 "base_bdevs_list": [ 00:17:24.541 { 00:17:24.541 "name": null, 00:17:24.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.541 "is_configured": false, 00:17:24.541 "data_offset": 0, 00:17:24.541 "data_size": 63488 00:17:24.541 }, 00:17:24.541 { 00:17:24.541 "name": "BaseBdev2", 00:17:24.541 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:24.541 "is_configured": true, 00:17:24.541 "data_offset": 2048, 00:17:24.541 "data_size": 63488 00:17:24.541 }, 00:17:24.541 { 00:17:24.541 "name": "BaseBdev3", 00:17:24.541 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:24.541 "is_configured": true, 00:17:24.541 "data_offset": 2048, 00:17:24.541 "data_size": 63488 00:17:24.541 } 00:17:24.541 ] 00:17:24.541 }' 00:17:24.541 11:27:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.541 11:27:07 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:25.110 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.111 "name": "raid_bdev1", 00:17:25.111 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:25.111 "strip_size_kb": 64, 00:17:25.111 "state": "online", 00:17:25.111 "raid_level": "raid5f", 00:17:25.111 "superblock": true, 00:17:25.111 "num_base_bdevs": 3, 00:17:25.111 "num_base_bdevs_discovered": 2, 00:17:25.111 "num_base_bdevs_operational": 2, 00:17:25.111 "base_bdevs_list": [ 00:17:25.111 { 00:17:25.111 "name": null, 00:17:25.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.111 "is_configured": false, 00:17:25.111 "data_offset": 0, 00:17:25.111 "data_size": 63488 00:17:25.111 }, 00:17:25.111 { 00:17:25.111 "name": "BaseBdev2", 00:17:25.111 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 
00:17:25.111 "is_configured": true, 00:17:25.111 "data_offset": 2048, 00:17:25.111 "data_size": 63488 00:17:25.111 }, 00:17:25.111 { 00:17:25.111 "name": "BaseBdev3", 00:17:25.111 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:25.111 "is_configured": true, 00:17:25.111 "data_offset": 2048, 00:17:25.111 "data_size": 63488 00:17:25.111 } 00:17:25.111 ] 00:17:25.111 }' 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.111 11:27:08 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.111 [2024-11-20 11:27:08.189943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.111 [2024-11-20 11:27:08.190194] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:25.111 [2024-11-20 11:27:08.190275] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:25.111 request: 00:17:25.111 { 00:17:25.111 "base_bdev": "BaseBdev1", 00:17:25.111 "raid_bdev": "raid_bdev1", 00:17:25.111 "method": "bdev_raid_add_base_bdev", 00:17:25.111 "req_id": 1 00:17:25.111 } 00:17:25.111 Got JSON-RPC error response 00:17:25.111 response: 00:17:25.111 { 00:17:25.111 "code": -22, 00:17:25.111 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:25.111 } 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:25.111 11:27:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:26.492 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:26.492 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.492 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.492 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.492 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.492 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:26.492 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.492 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.492 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.492 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.492 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.492 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.492 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.492 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.492 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.492 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.492 "name": "raid_bdev1", 00:17:26.492 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:26.492 "strip_size_kb": 64, 00:17:26.492 "state": "online", 00:17:26.492 "raid_level": "raid5f", 00:17:26.492 "superblock": true, 00:17:26.492 "num_base_bdevs": 3, 00:17:26.492 "num_base_bdevs_discovered": 2, 00:17:26.492 "num_base_bdevs_operational": 2, 00:17:26.492 "base_bdevs_list": [ 00:17:26.492 { 00:17:26.492 "name": null, 00:17:26.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.492 "is_configured": false, 00:17:26.492 "data_offset": 0, 00:17:26.492 "data_size": 63488 00:17:26.492 }, 00:17:26.492 { 00:17:26.492 
"name": "BaseBdev2", 00:17:26.492 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:26.492 "is_configured": true, 00:17:26.492 "data_offset": 2048, 00:17:26.492 "data_size": 63488 00:17:26.492 }, 00:17:26.492 { 00:17:26.492 "name": "BaseBdev3", 00:17:26.492 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:26.492 "is_configured": true, 00:17:26.492 "data_offset": 2048, 00:17:26.492 "data_size": 63488 00:17:26.492 } 00:17:26.492 ] 00:17:26.492 }' 00:17:26.492 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.492 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.750 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:26.750 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.750 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:26.750 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:26.750 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.750 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.750 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.750 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.750 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.750 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.750 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.750 "name": "raid_bdev1", 00:17:26.750 "uuid": "baa8b4c8-bcf2-4c94-a107-5fc6451a6345", 00:17:26.750 
"strip_size_kb": 64, 00:17:26.750 "state": "online", 00:17:26.750 "raid_level": "raid5f", 00:17:26.750 "superblock": true, 00:17:26.750 "num_base_bdevs": 3, 00:17:26.750 "num_base_bdevs_discovered": 2, 00:17:26.750 "num_base_bdevs_operational": 2, 00:17:26.750 "base_bdevs_list": [ 00:17:26.750 { 00:17:26.750 "name": null, 00:17:26.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.750 "is_configured": false, 00:17:26.751 "data_offset": 0, 00:17:26.751 "data_size": 63488 00:17:26.751 }, 00:17:26.751 { 00:17:26.751 "name": "BaseBdev2", 00:17:26.751 "uuid": "43e876c6-979b-5ed9-a030-d48ac0ff5847", 00:17:26.751 "is_configured": true, 00:17:26.751 "data_offset": 2048, 00:17:26.751 "data_size": 63488 00:17:26.751 }, 00:17:26.751 { 00:17:26.751 "name": "BaseBdev3", 00:17:26.751 "uuid": "33b3525c-3242-5d6d-bb27-22812a26e09f", 00:17:26.751 "is_configured": true, 00:17:26.751 "data_offset": 2048, 00:17:26.751 "data_size": 63488 00:17:26.751 } 00:17:26.751 ] 00:17:26.751 }' 00:17:26.751 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.751 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:26.751 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.751 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:26.751 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82212 00:17:26.751 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82212 ']' 00:17:26.751 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82212 00:17:26.751 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:26.751 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.751 11:27:09 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82212 00:17:26.751 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:26.751 killing process with pid 82212 00:17:26.751 Received shutdown signal, test time was about 60.000000 seconds 00:17:26.751 00:17:26.751 Latency(us) 00:17:26.751 [2024-11-20T11:27:09.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.751 [2024-11-20T11:27:09.867Z] =================================================================================================================== 00:17:26.751 [2024-11-20T11:27:09.867Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:26.751 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:26.751 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82212' 00:17:26.751 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82212 00:17:26.751 [2024-11-20 11:27:09.855742] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:26.751 11:27:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82212 00:17:26.751 [2024-11-20 11:27:09.855892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.751 [2024-11-20 11:27:09.855971] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:26.751 [2024-11-20 11:27:09.856011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:27.354 [2024-11-20 11:27:10.330488] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:28.735 11:27:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:28.735 00:17:28.735 real 0m24.404s 00:17:28.735 user 0m31.468s 
00:17:28.735 sys 0m2.927s 00:17:28.735 11:27:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.735 11:27:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.735 ************************************ 00:17:28.735 END TEST raid5f_rebuild_test_sb 00:17:28.735 ************************************ 00:17:28.735 11:27:11 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:17:28.735 11:27:11 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:17:28.735 11:27:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:28.735 11:27:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:28.735 11:27:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:28.735 ************************************ 00:17:28.735 START TEST raid5f_state_function_test 00:17:28.735 ************************************ 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82981 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:28.735 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82981' 00:17:28.735 Process raid pid: 82981 00:17:28.736 11:27:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82981 00:17:28.736 11:27:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82981 ']' 00:17:28.736 11:27:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.736 11:27:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:28.736 11:27:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.736 11:27:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:28.736 11:27:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.736 [2024-11-20 11:27:11.804093] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:17:28.736 [2024-11-20 11:27:11.804324] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.995 [2024-11-20 11:27:11.986214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.255 [2024-11-20 11:27:12.116315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.255 [2024-11-20 11:27:12.342413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.255 [2024-11-20 11:27:12.342568] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.825 11:27:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.825 11:27:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:17:29.825 11:27:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:29.825 11:27:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.825 11:27:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.825 [2024-11-20 11:27:12.732056] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:29.825 [2024-11-20 11:27:12.732177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:29.825 [2024-11-20 11:27:12.732214] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:29.825 [2024-11-20 11:27:12.732242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:29.825 [2024-11-20 11:27:12.732286] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:17:29.825 [2024-11-20 11:27:12.732312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:29.825 [2024-11-20 11:27:12.732344] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:29.825 [2024-11-20 11:27:12.732369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:29.825 11:27:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.825 11:27:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:29.825 11:27:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.825 11:27:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.825 11:27:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:29.825 11:27:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.825 11:27:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:29.825 11:27:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.825 11:27:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.825 11:27:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.825 11:27:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.825 11:27:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.826 11:27:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.826 11:27:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:29.826 11:27:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.826 11:27:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.826 11:27:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.826 "name": "Existed_Raid", 00:17:29.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.826 "strip_size_kb": 64, 00:17:29.826 "state": "configuring", 00:17:29.826 "raid_level": "raid5f", 00:17:29.826 "superblock": false, 00:17:29.826 "num_base_bdevs": 4, 00:17:29.826 "num_base_bdevs_discovered": 0, 00:17:29.826 "num_base_bdevs_operational": 4, 00:17:29.826 "base_bdevs_list": [ 00:17:29.826 { 00:17:29.826 "name": "BaseBdev1", 00:17:29.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.826 "is_configured": false, 00:17:29.826 "data_offset": 0, 00:17:29.826 "data_size": 0 00:17:29.826 }, 00:17:29.826 { 00:17:29.826 "name": "BaseBdev2", 00:17:29.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.826 "is_configured": false, 00:17:29.826 "data_offset": 0, 00:17:29.826 "data_size": 0 00:17:29.826 }, 00:17:29.826 { 00:17:29.826 "name": "BaseBdev3", 00:17:29.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.826 "is_configured": false, 00:17:29.826 "data_offset": 0, 00:17:29.826 "data_size": 0 00:17:29.826 }, 00:17:29.826 { 00:17:29.826 "name": "BaseBdev4", 00:17:29.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.826 "is_configured": false, 00:17:29.826 "data_offset": 0, 00:17:29.826 "data_size": 0 00:17:29.826 } 00:17:29.826 ] 00:17:29.826 }' 00:17:29.826 11:27:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.826 11:27:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.394 [2024-11-20 11:27:13.235148] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:30.394 [2024-11-20 11:27:13.235274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.394 [2024-11-20 11:27:13.247129] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:30.394 [2024-11-20 11:27:13.247220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:30.394 [2024-11-20 11:27:13.247249] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:30.394 [2024-11-20 11:27:13.247273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:30.394 [2024-11-20 11:27:13.247292] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:30.394 [2024-11-20 11:27:13.247314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:30.394 [2024-11-20 11:27:13.247332] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:17:30.394 [2024-11-20 11:27:13.247353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.394 [2024-11-20 11:27:13.296032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.394 BaseBdev1 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.394 
11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.394 [ 00:17:30.394 { 00:17:30.394 "name": "BaseBdev1", 00:17:30.394 "aliases": [ 00:17:30.394 "0fe8a1c6-df6a-45a6-ae2a-bb3437843629" 00:17:30.394 ], 00:17:30.394 "product_name": "Malloc disk", 00:17:30.394 "block_size": 512, 00:17:30.394 "num_blocks": 65536, 00:17:30.394 "uuid": "0fe8a1c6-df6a-45a6-ae2a-bb3437843629", 00:17:30.394 "assigned_rate_limits": { 00:17:30.394 "rw_ios_per_sec": 0, 00:17:30.394 "rw_mbytes_per_sec": 0, 00:17:30.394 "r_mbytes_per_sec": 0, 00:17:30.394 "w_mbytes_per_sec": 0 00:17:30.394 }, 00:17:30.394 "claimed": true, 00:17:30.394 "claim_type": "exclusive_write", 00:17:30.394 "zoned": false, 00:17:30.394 "supported_io_types": { 00:17:30.394 "read": true, 00:17:30.394 "write": true, 00:17:30.394 "unmap": true, 00:17:30.394 "flush": true, 00:17:30.394 "reset": true, 00:17:30.394 "nvme_admin": false, 00:17:30.394 "nvme_io": false, 00:17:30.394 "nvme_io_md": false, 00:17:30.394 "write_zeroes": true, 00:17:30.394 "zcopy": true, 00:17:30.394 "get_zone_info": false, 00:17:30.394 "zone_management": false, 00:17:30.394 "zone_append": false, 00:17:30.394 "compare": false, 00:17:30.394 "compare_and_write": false, 00:17:30.394 "abort": true, 00:17:30.394 "seek_hole": false, 00:17:30.394 "seek_data": false, 00:17:30.394 "copy": true, 00:17:30.394 "nvme_iov_md": false 00:17:30.394 }, 00:17:30.394 "memory_domains": [ 00:17:30.394 { 00:17:30.394 "dma_device_id": "system", 00:17:30.394 "dma_device_type": 1 00:17:30.394 }, 00:17:30.394 { 00:17:30.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.394 "dma_device_type": 2 00:17:30.394 } 00:17:30.394 ], 00:17:30.394 "driver_specific": {} 00:17:30.394 } 
00:17:30.394 ] 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:30.394 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.394 "name": "Existed_Raid", 00:17:30.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.394 "strip_size_kb": 64, 00:17:30.394 "state": "configuring", 00:17:30.394 "raid_level": "raid5f", 00:17:30.394 "superblock": false, 00:17:30.394 "num_base_bdevs": 4, 00:17:30.394 "num_base_bdevs_discovered": 1, 00:17:30.394 "num_base_bdevs_operational": 4, 00:17:30.394 "base_bdevs_list": [ 00:17:30.394 { 00:17:30.394 "name": "BaseBdev1", 00:17:30.394 "uuid": "0fe8a1c6-df6a-45a6-ae2a-bb3437843629", 00:17:30.394 "is_configured": true, 00:17:30.394 "data_offset": 0, 00:17:30.394 "data_size": 65536 00:17:30.394 }, 00:17:30.394 { 00:17:30.394 "name": "BaseBdev2", 00:17:30.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.395 "is_configured": false, 00:17:30.395 "data_offset": 0, 00:17:30.395 "data_size": 0 00:17:30.395 }, 00:17:30.395 { 00:17:30.395 "name": "BaseBdev3", 00:17:30.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.395 "is_configured": false, 00:17:30.395 "data_offset": 0, 00:17:30.395 "data_size": 0 00:17:30.395 }, 00:17:30.395 { 00:17:30.395 "name": "BaseBdev4", 00:17:30.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.395 "is_configured": false, 00:17:30.395 "data_offset": 0, 00:17:30.395 "data_size": 0 00:17:30.395 } 00:17:30.395 ] 00:17:30.395 }' 00:17:30.395 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.395 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.654 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:30.654 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.654 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.914 
[2024-11-20 11:27:13.767490] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:30.914 [2024-11-20 11:27:13.767564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.914 [2024-11-20 11:27:13.775532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.914 [2024-11-20 11:27:13.777587] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:30.914 [2024-11-20 11:27:13.777699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:30.914 [2024-11-20 11:27:13.777718] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:30.914 [2024-11-20 11:27:13.777733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:30.914 [2024-11-20 11:27:13.777743] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:30.914 [2024-11-20 11:27:13.777755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.914 "name": "Existed_Raid", 00:17:30.914 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:30.914 "strip_size_kb": 64, 00:17:30.914 "state": "configuring", 00:17:30.914 "raid_level": "raid5f", 00:17:30.914 "superblock": false, 00:17:30.914 "num_base_bdevs": 4, 00:17:30.914 "num_base_bdevs_discovered": 1, 00:17:30.914 "num_base_bdevs_operational": 4, 00:17:30.914 "base_bdevs_list": [ 00:17:30.914 { 00:17:30.914 "name": "BaseBdev1", 00:17:30.914 "uuid": "0fe8a1c6-df6a-45a6-ae2a-bb3437843629", 00:17:30.914 "is_configured": true, 00:17:30.914 "data_offset": 0, 00:17:30.914 "data_size": 65536 00:17:30.914 }, 00:17:30.914 { 00:17:30.914 "name": "BaseBdev2", 00:17:30.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.914 "is_configured": false, 00:17:30.914 "data_offset": 0, 00:17:30.914 "data_size": 0 00:17:30.914 }, 00:17:30.914 { 00:17:30.914 "name": "BaseBdev3", 00:17:30.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.914 "is_configured": false, 00:17:30.914 "data_offset": 0, 00:17:30.914 "data_size": 0 00:17:30.914 }, 00:17:30.914 { 00:17:30.914 "name": "BaseBdev4", 00:17:30.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.914 "is_configured": false, 00:17:30.914 "data_offset": 0, 00:17:30.914 "data_size": 0 00:17:30.914 } 00:17:30.914 ] 00:17:30.914 }' 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.914 11:27:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.482 [2024-11-20 11:27:14.337737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:31.482 BaseBdev2 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.482 [ 00:17:31.482 { 00:17:31.482 "name": "BaseBdev2", 00:17:31.482 "aliases": [ 00:17:31.482 "1337d551-c6bf-49b3-bb5d-752290e6ac32" 00:17:31.482 ], 00:17:31.482 "product_name": "Malloc disk", 00:17:31.482 "block_size": 512, 00:17:31.482 "num_blocks": 65536, 00:17:31.482 "uuid": "1337d551-c6bf-49b3-bb5d-752290e6ac32", 00:17:31.482 "assigned_rate_limits": { 00:17:31.482 "rw_ios_per_sec": 0, 00:17:31.482 "rw_mbytes_per_sec": 0, 00:17:31.482 
"r_mbytes_per_sec": 0, 00:17:31.482 "w_mbytes_per_sec": 0 00:17:31.482 }, 00:17:31.482 "claimed": true, 00:17:31.482 "claim_type": "exclusive_write", 00:17:31.482 "zoned": false, 00:17:31.482 "supported_io_types": { 00:17:31.482 "read": true, 00:17:31.482 "write": true, 00:17:31.482 "unmap": true, 00:17:31.482 "flush": true, 00:17:31.482 "reset": true, 00:17:31.482 "nvme_admin": false, 00:17:31.482 "nvme_io": false, 00:17:31.482 "nvme_io_md": false, 00:17:31.482 "write_zeroes": true, 00:17:31.482 "zcopy": true, 00:17:31.482 "get_zone_info": false, 00:17:31.482 "zone_management": false, 00:17:31.482 "zone_append": false, 00:17:31.482 "compare": false, 00:17:31.482 "compare_and_write": false, 00:17:31.482 "abort": true, 00:17:31.482 "seek_hole": false, 00:17:31.482 "seek_data": false, 00:17:31.482 "copy": true, 00:17:31.482 "nvme_iov_md": false 00:17:31.482 }, 00:17:31.482 "memory_domains": [ 00:17:31.482 { 00:17:31.482 "dma_device_id": "system", 00:17:31.482 "dma_device_type": 1 00:17:31.482 }, 00:17:31.482 { 00:17:31.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.482 "dma_device_type": 2 00:17:31.482 } 00:17:31.482 ], 00:17:31.482 "driver_specific": {} 00:17:31.482 } 00:17:31.482 ] 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.482 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.483 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.483 "name": "Existed_Raid", 00:17:31.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.483 "strip_size_kb": 64, 00:17:31.483 "state": "configuring", 00:17:31.483 "raid_level": "raid5f", 00:17:31.483 "superblock": false, 00:17:31.483 "num_base_bdevs": 4, 00:17:31.483 "num_base_bdevs_discovered": 2, 00:17:31.483 "num_base_bdevs_operational": 4, 00:17:31.483 "base_bdevs_list": [ 00:17:31.483 { 00:17:31.483 "name": "BaseBdev1", 00:17:31.483 "uuid": 
"0fe8a1c6-df6a-45a6-ae2a-bb3437843629", 00:17:31.483 "is_configured": true, 00:17:31.483 "data_offset": 0, 00:17:31.483 "data_size": 65536 00:17:31.483 }, 00:17:31.483 { 00:17:31.483 "name": "BaseBdev2", 00:17:31.483 "uuid": "1337d551-c6bf-49b3-bb5d-752290e6ac32", 00:17:31.483 "is_configured": true, 00:17:31.483 "data_offset": 0, 00:17:31.483 "data_size": 65536 00:17:31.483 }, 00:17:31.483 { 00:17:31.483 "name": "BaseBdev3", 00:17:31.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.483 "is_configured": false, 00:17:31.483 "data_offset": 0, 00:17:31.483 "data_size": 0 00:17:31.483 }, 00:17:31.483 { 00:17:31.483 "name": "BaseBdev4", 00:17:31.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.483 "is_configured": false, 00:17:31.483 "data_offset": 0, 00:17:31.483 "data_size": 0 00:17:31.483 } 00:17:31.483 ] 00:17:31.483 }' 00:17:31.483 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.483 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.746 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:31.746 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.746 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.746 [2024-11-20 11:27:14.822091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:31.746 BaseBdev3 00:17:31.746 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.746 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:31.746 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:31.746 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:17:31.746 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:31.746 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:31.746 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:31.746 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:31.746 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.746 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.746 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.746 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:31.746 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.747 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.747 [ 00:17:31.747 { 00:17:31.747 "name": "BaseBdev3", 00:17:31.747 "aliases": [ 00:17:31.747 "14c64c60-bfbe-4642-a08e-7b62c00ad930" 00:17:31.747 ], 00:17:31.747 "product_name": "Malloc disk", 00:17:31.747 "block_size": 512, 00:17:31.747 "num_blocks": 65536, 00:17:31.747 "uuid": "14c64c60-bfbe-4642-a08e-7b62c00ad930", 00:17:31.747 "assigned_rate_limits": { 00:17:31.747 "rw_ios_per_sec": 0, 00:17:31.747 "rw_mbytes_per_sec": 0, 00:17:31.747 "r_mbytes_per_sec": 0, 00:17:31.747 "w_mbytes_per_sec": 0 00:17:31.747 }, 00:17:31.747 "claimed": true, 00:17:31.747 "claim_type": "exclusive_write", 00:17:31.747 "zoned": false, 00:17:31.747 "supported_io_types": { 00:17:31.747 "read": true, 00:17:31.747 "write": true, 00:17:31.747 "unmap": true, 00:17:31.747 "flush": true, 00:17:31.747 "reset": true, 00:17:31.747 "nvme_admin": false, 
00:17:31.747 "nvme_io": false, 00:17:31.747 "nvme_io_md": false, 00:17:31.747 "write_zeroes": true, 00:17:31.747 "zcopy": true, 00:17:31.747 "get_zone_info": false, 00:17:31.747 "zone_management": false, 00:17:31.747 "zone_append": false, 00:17:31.747 "compare": false, 00:17:31.747 "compare_and_write": false, 00:17:31.747 "abort": true, 00:17:31.747 "seek_hole": false, 00:17:31.747 "seek_data": false, 00:17:31.747 "copy": true, 00:17:31.747 "nvme_iov_md": false 00:17:31.747 }, 00:17:31.747 "memory_domains": [ 00:17:31.747 { 00:17:31.747 "dma_device_id": "system", 00:17:32.006 "dma_device_type": 1 00:17:32.006 }, 00:17:32.006 { 00:17:32.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.006 "dma_device_type": 2 00:17:32.006 } 00:17:32.006 ], 00:17:32.006 "driver_specific": {} 00:17:32.006 } 00:17:32.006 ] 00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.006 "name": "Existed_Raid", 00:17:32.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.006 "strip_size_kb": 64, 00:17:32.006 "state": "configuring", 00:17:32.006 "raid_level": "raid5f", 00:17:32.006 "superblock": false, 00:17:32.006 "num_base_bdevs": 4, 00:17:32.006 "num_base_bdevs_discovered": 3, 00:17:32.006 "num_base_bdevs_operational": 4, 00:17:32.006 "base_bdevs_list": [ 00:17:32.006 { 00:17:32.006 "name": "BaseBdev1", 00:17:32.006 "uuid": "0fe8a1c6-df6a-45a6-ae2a-bb3437843629", 00:17:32.006 "is_configured": true, 00:17:32.006 "data_offset": 0, 00:17:32.006 "data_size": 65536 00:17:32.006 }, 00:17:32.006 { 00:17:32.006 "name": "BaseBdev2", 00:17:32.006 "uuid": "1337d551-c6bf-49b3-bb5d-752290e6ac32", 00:17:32.006 "is_configured": true, 00:17:32.006 "data_offset": 0, 00:17:32.006 "data_size": 65536 00:17:32.006 }, 00:17:32.006 { 
00:17:32.006 "name": "BaseBdev3", 00:17:32.006 "uuid": "14c64c60-bfbe-4642-a08e-7b62c00ad930", 00:17:32.006 "is_configured": true, 00:17:32.006 "data_offset": 0, 00:17:32.006 "data_size": 65536 00:17:32.006 }, 00:17:32.006 { 00:17:32.006 "name": "BaseBdev4", 00:17:32.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.006 "is_configured": false, 00:17:32.006 "data_offset": 0, 00:17:32.006 "data_size": 0 00:17:32.006 } 00:17:32.006 ] 00:17:32.006 }' 00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.006 11:27:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.265 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:32.265 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.265 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.265 [2024-11-20 11:27:15.363790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:32.265 [2024-11-20 11:27:15.363965] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:32.265 [2024-11-20 11:27:15.364001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:32.265 [2024-11-20 11:27:15.364338] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:32.265 [2024-11-20 11:27:15.373056] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:32.265 [2024-11-20 11:27:15.373134] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:32.265 [2024-11-20 11:27:15.373445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.265 BaseBdev4 00:17:32.265 11:27:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.265 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:32.265 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:32.265 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:32.265 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:32.265 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:32.265 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:32.265 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:32.265 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.265 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.523 [ 00:17:32.523 { 00:17:32.523 "name": "BaseBdev4", 00:17:32.523 "aliases": [ 00:17:32.523 "ae6af998-a2cb-426e-b374-5ab05a091049" 00:17:32.523 ], 00:17:32.523 "product_name": "Malloc disk", 00:17:32.523 "block_size": 512, 00:17:32.523 "num_blocks": 65536, 00:17:32.523 "uuid": "ae6af998-a2cb-426e-b374-5ab05a091049", 00:17:32.523 "assigned_rate_limits": { 00:17:32.523 "rw_ios_per_sec": 0, 00:17:32.523 
"rw_mbytes_per_sec": 0, 00:17:32.523 "r_mbytes_per_sec": 0, 00:17:32.523 "w_mbytes_per_sec": 0 00:17:32.523 }, 00:17:32.523 "claimed": true, 00:17:32.523 "claim_type": "exclusive_write", 00:17:32.523 "zoned": false, 00:17:32.523 "supported_io_types": { 00:17:32.523 "read": true, 00:17:32.523 "write": true, 00:17:32.523 "unmap": true, 00:17:32.523 "flush": true, 00:17:32.523 "reset": true, 00:17:32.523 "nvme_admin": false, 00:17:32.523 "nvme_io": false, 00:17:32.523 "nvme_io_md": false, 00:17:32.523 "write_zeroes": true, 00:17:32.523 "zcopy": true, 00:17:32.523 "get_zone_info": false, 00:17:32.523 "zone_management": false, 00:17:32.523 "zone_append": false, 00:17:32.523 "compare": false, 00:17:32.523 "compare_and_write": false, 00:17:32.523 "abort": true, 00:17:32.523 "seek_hole": false, 00:17:32.523 "seek_data": false, 00:17:32.523 "copy": true, 00:17:32.523 "nvme_iov_md": false 00:17:32.523 }, 00:17:32.523 "memory_domains": [ 00:17:32.523 { 00:17:32.523 "dma_device_id": "system", 00:17:32.523 "dma_device_type": 1 00:17:32.523 }, 00:17:32.523 { 00:17:32.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.523 "dma_device_type": 2 00:17:32.523 } 00:17:32.523 ], 00:17:32.523 "driver_specific": {} 00:17:32.523 } 00:17:32.523 ] 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.523 11:27:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.523 "name": "Existed_Raid", 00:17:32.523 "uuid": "25ee7d4d-c1ba-4ee2-a025-03daeaf7919b", 00:17:32.523 "strip_size_kb": 64, 00:17:32.523 "state": "online", 00:17:32.523 "raid_level": "raid5f", 00:17:32.523 "superblock": false, 00:17:32.523 "num_base_bdevs": 4, 00:17:32.523 "num_base_bdevs_discovered": 4, 00:17:32.523 "num_base_bdevs_operational": 4, 00:17:32.523 "base_bdevs_list": [ 00:17:32.523 { 00:17:32.523 "name": 
"BaseBdev1", 00:17:32.523 "uuid": "0fe8a1c6-df6a-45a6-ae2a-bb3437843629", 00:17:32.523 "is_configured": true, 00:17:32.523 "data_offset": 0, 00:17:32.523 "data_size": 65536 00:17:32.523 }, 00:17:32.523 { 00:17:32.523 "name": "BaseBdev2", 00:17:32.523 "uuid": "1337d551-c6bf-49b3-bb5d-752290e6ac32", 00:17:32.523 "is_configured": true, 00:17:32.523 "data_offset": 0, 00:17:32.523 "data_size": 65536 00:17:32.523 }, 00:17:32.523 { 00:17:32.523 "name": "BaseBdev3", 00:17:32.523 "uuid": "14c64c60-bfbe-4642-a08e-7b62c00ad930", 00:17:32.523 "is_configured": true, 00:17:32.523 "data_offset": 0, 00:17:32.523 "data_size": 65536 00:17:32.523 }, 00:17:32.523 { 00:17:32.523 "name": "BaseBdev4", 00:17:32.523 "uuid": "ae6af998-a2cb-426e-b374-5ab05a091049", 00:17:32.523 "is_configured": true, 00:17:32.523 "data_offset": 0, 00:17:32.523 "data_size": 65536 00:17:32.523 } 00:17:32.523 ] 00:17:32.523 }' 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.523 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.781 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:32.781 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:32.782 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:32.782 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:32.782 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:32.782 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:32.782 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:32.782 11:27:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.782 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.782 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:32.782 [2024-11-20 11:27:15.853543] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.782 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.782 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:32.782 "name": "Existed_Raid", 00:17:32.782 "aliases": [ 00:17:32.782 "25ee7d4d-c1ba-4ee2-a025-03daeaf7919b" 00:17:32.782 ], 00:17:32.782 "product_name": "Raid Volume", 00:17:32.782 "block_size": 512, 00:17:32.782 "num_blocks": 196608, 00:17:32.782 "uuid": "25ee7d4d-c1ba-4ee2-a025-03daeaf7919b", 00:17:32.782 "assigned_rate_limits": { 00:17:32.782 "rw_ios_per_sec": 0, 00:17:32.782 "rw_mbytes_per_sec": 0, 00:17:32.782 "r_mbytes_per_sec": 0, 00:17:32.782 "w_mbytes_per_sec": 0 00:17:32.782 }, 00:17:32.782 "claimed": false, 00:17:32.782 "zoned": false, 00:17:32.782 "supported_io_types": { 00:17:32.782 "read": true, 00:17:32.782 "write": true, 00:17:32.782 "unmap": false, 00:17:32.782 "flush": false, 00:17:32.782 "reset": true, 00:17:32.782 "nvme_admin": false, 00:17:32.782 "nvme_io": false, 00:17:32.782 "nvme_io_md": false, 00:17:32.782 "write_zeroes": true, 00:17:32.782 "zcopy": false, 00:17:32.782 "get_zone_info": false, 00:17:32.782 "zone_management": false, 00:17:32.782 "zone_append": false, 00:17:32.782 "compare": false, 00:17:32.782 "compare_and_write": false, 00:17:32.782 "abort": false, 00:17:32.782 "seek_hole": false, 00:17:32.782 "seek_data": false, 00:17:32.782 "copy": false, 00:17:32.782 "nvme_iov_md": false 00:17:32.782 }, 00:17:32.782 "driver_specific": { 00:17:32.782 "raid": { 00:17:32.782 "uuid": "25ee7d4d-c1ba-4ee2-a025-03daeaf7919b", 00:17:32.782 "strip_size_kb": 64, 
00:17:32.782 "state": "online", 00:17:32.782 "raid_level": "raid5f", 00:17:32.782 "superblock": false, 00:17:32.782 "num_base_bdevs": 4, 00:17:32.782 "num_base_bdevs_discovered": 4, 00:17:32.782 "num_base_bdevs_operational": 4, 00:17:32.782 "base_bdevs_list": [ 00:17:32.782 { 00:17:32.782 "name": "BaseBdev1", 00:17:32.782 "uuid": "0fe8a1c6-df6a-45a6-ae2a-bb3437843629", 00:17:32.782 "is_configured": true, 00:17:32.782 "data_offset": 0, 00:17:32.782 "data_size": 65536 00:17:32.782 }, 00:17:32.782 { 00:17:32.782 "name": "BaseBdev2", 00:17:32.782 "uuid": "1337d551-c6bf-49b3-bb5d-752290e6ac32", 00:17:32.782 "is_configured": true, 00:17:32.782 "data_offset": 0, 00:17:32.782 "data_size": 65536 00:17:32.782 }, 00:17:32.782 { 00:17:32.782 "name": "BaseBdev3", 00:17:32.782 "uuid": "14c64c60-bfbe-4642-a08e-7b62c00ad930", 00:17:32.782 "is_configured": true, 00:17:32.782 "data_offset": 0, 00:17:32.782 "data_size": 65536 00:17:32.782 }, 00:17:32.782 { 00:17:32.782 "name": "BaseBdev4", 00:17:32.782 "uuid": "ae6af998-a2cb-426e-b374-5ab05a091049", 00:17:32.782 "is_configured": true, 00:17:32.782 "data_offset": 0, 00:17:32.782 "data_size": 65536 00:17:32.782 } 00:17:32.782 ] 00:17:32.782 } 00:17:32.782 } 00:17:32.782 }' 00:17:32.782 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:33.040 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:33.040 BaseBdev2 00:17:33.040 BaseBdev3 00:17:33.040 BaseBdev4' 00:17:33.040 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.040 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:33.040 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.040 11:27:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:33.040 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.040 11:27:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.040 11:27:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.040 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.298 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.298 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.298 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.298 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:33.298 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.298 11:27:16 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:17:33.298 [2024-11-20 11:27:16.196769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:33.298 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.298 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:33.298 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:33.298 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:33.298 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:33.298 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:33.298 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:33.298 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:33.298 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.298 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.298 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.298 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:33.298 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.298 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.298 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.299 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.299 11:27:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.299 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.299 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.299 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.299 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.299 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.299 "name": "Existed_Raid", 00:17:33.299 "uuid": "25ee7d4d-c1ba-4ee2-a025-03daeaf7919b", 00:17:33.299 "strip_size_kb": 64, 00:17:33.299 "state": "online", 00:17:33.299 "raid_level": "raid5f", 00:17:33.299 "superblock": false, 00:17:33.299 "num_base_bdevs": 4, 00:17:33.299 "num_base_bdevs_discovered": 3, 00:17:33.299 "num_base_bdevs_operational": 3, 00:17:33.299 "base_bdevs_list": [ 00:17:33.299 { 00:17:33.299 "name": null, 00:17:33.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.299 "is_configured": false, 00:17:33.299 "data_offset": 0, 00:17:33.299 "data_size": 65536 00:17:33.299 }, 00:17:33.299 { 00:17:33.299 "name": "BaseBdev2", 00:17:33.299 "uuid": "1337d551-c6bf-49b3-bb5d-752290e6ac32", 00:17:33.299 "is_configured": true, 00:17:33.299 "data_offset": 0, 00:17:33.299 "data_size": 65536 00:17:33.299 }, 00:17:33.299 { 00:17:33.299 "name": "BaseBdev3", 00:17:33.299 "uuid": "14c64c60-bfbe-4642-a08e-7b62c00ad930", 00:17:33.299 "is_configured": true, 00:17:33.299 "data_offset": 0, 00:17:33.299 "data_size": 65536 00:17:33.299 }, 00:17:33.299 { 00:17:33.299 "name": "BaseBdev4", 00:17:33.299 "uuid": "ae6af998-a2cb-426e-b374-5ab05a091049", 00:17:33.299 "is_configured": true, 00:17:33.299 "data_offset": 0, 00:17:33.299 "data_size": 65536 00:17:33.299 } 00:17:33.299 ] 00:17:33.299 }' 00:17:33.299 
11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.299 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.866 [2024-11-20 11:27:16.782007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:33.866 [2024-11-20 11:27:16.782162] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.866 [2024-11-20 11:27:16.885512] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.866 11:27:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.866 [2024-11-20 11:27:16.945475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:34.125 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.125 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:34.125 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:34.125 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.125 11:27:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.125 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.125 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:34.125 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.126 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:34.126 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:34.126 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:34.126 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.126 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.126 [2024-11-20 11:27:17.109177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:34.126 [2024-11-20 11:27:17.109271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:34.126 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.126 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:34.126 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:34.126 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.126 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:34.126 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.126 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
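The dumps above repeatedly feed `bdev_raid_get_bdevs` output through jq (e.g. `bdev_raid.sh@188`) to pull the configured base bdev names out of `driver_specific.raid.base_bdevs_list`. A standalone sketch of that filter, assuming `jq` is installed and using a hand-trimmed sample rather than live RPC output:

```shell
# Trimmed sample of the shape seen in the raid bdev info dumps above;
# the null-name entry mirrors the placeholder left after a base bdev
# is removed ("name": null, "is_configured": false).
info='{"driver_specific":{"raid":{"base_bdevs_list":[
  {"name":"BaseBdev1","is_configured":true},
  {"name":"BaseBdev2","is_configured":true},
  {"name":null,"is_configured":false}]}}}'

# Same select(.is_configured == true).name filter the test uses.
names=$(echo "$info" | jq -r \
  '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
echo "$names"
```

The raw (`-r`) output is a newline-separated name list, which is why the test can iterate it directly with `for name in $base_bdev_names`.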
00:17:34.126 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.385 BaseBdev2 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.385 [ 00:17:34.385 { 00:17:34.385 "name": "BaseBdev2", 00:17:34.385 "aliases": [ 00:17:34.385 "9ff7d183-d858-446d-ad92-f8444ff969d6" 00:17:34.385 ], 00:17:34.385 "product_name": "Malloc disk", 00:17:34.385 "block_size": 512, 00:17:34.385 "num_blocks": 65536, 00:17:34.385 "uuid": "9ff7d183-d858-446d-ad92-f8444ff969d6", 00:17:34.385 "assigned_rate_limits": { 00:17:34.385 "rw_ios_per_sec": 0, 00:17:34.385 "rw_mbytes_per_sec": 0, 00:17:34.385 "r_mbytes_per_sec": 0, 00:17:34.385 "w_mbytes_per_sec": 0 00:17:34.385 }, 00:17:34.385 "claimed": false, 00:17:34.385 "zoned": false, 00:17:34.385 "supported_io_types": { 00:17:34.385 "read": true, 00:17:34.385 "write": true, 00:17:34.385 "unmap": true, 00:17:34.385 "flush": true, 00:17:34.385 "reset": true, 00:17:34.385 "nvme_admin": false, 00:17:34.385 "nvme_io": false, 00:17:34.385 "nvme_io_md": false, 00:17:34.385 "write_zeroes": true, 00:17:34.385 "zcopy": true, 00:17:34.385 "get_zone_info": false, 00:17:34.385 "zone_management": false, 00:17:34.385 "zone_append": false, 00:17:34.385 "compare": false, 00:17:34.385 "compare_and_write": false, 00:17:34.385 "abort": true, 00:17:34.385 "seek_hole": false, 00:17:34.385 "seek_data": false, 00:17:34.385 "copy": true, 00:17:34.385 "nvme_iov_md": false 00:17:34.385 }, 00:17:34.385 "memory_domains": [ 00:17:34.385 { 00:17:34.385 "dma_device_id": "system", 00:17:34.385 
"dma_device_type": 1 00:17:34.385 }, 00:17:34.385 { 00:17:34.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.385 "dma_device_type": 2 00:17:34.385 } 00:17:34.385 ], 00:17:34.385 "driver_specific": {} 00:17:34.385 } 00:17:34.385 ] 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.385 BaseBdev3 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:34.385 11:27:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.385 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.385 [ 00:17:34.385 { 00:17:34.385 "name": "BaseBdev3", 00:17:34.385 "aliases": [ 00:17:34.385 "746c0d11-9cdc-43ae-baa4-638819ce279a" 00:17:34.385 ], 00:17:34.385 "product_name": "Malloc disk", 00:17:34.385 "block_size": 512, 00:17:34.385 "num_blocks": 65536, 00:17:34.385 "uuid": "746c0d11-9cdc-43ae-baa4-638819ce279a", 00:17:34.385 "assigned_rate_limits": { 00:17:34.385 "rw_ios_per_sec": 0, 00:17:34.385 "rw_mbytes_per_sec": 0, 00:17:34.385 "r_mbytes_per_sec": 0, 00:17:34.385 "w_mbytes_per_sec": 0 00:17:34.385 }, 00:17:34.385 "claimed": false, 00:17:34.385 "zoned": false, 00:17:34.385 "supported_io_types": { 00:17:34.385 "read": true, 00:17:34.385 "write": true, 00:17:34.385 "unmap": true, 00:17:34.385 "flush": true, 00:17:34.385 "reset": true, 00:17:34.385 "nvme_admin": false, 00:17:34.385 "nvme_io": false, 00:17:34.386 "nvme_io_md": false, 00:17:34.386 "write_zeroes": true, 00:17:34.386 "zcopy": true, 00:17:34.386 "get_zone_info": false, 00:17:34.386 "zone_management": false, 00:17:34.386 "zone_append": false, 00:17:34.386 "compare": false, 00:17:34.386 "compare_and_write": false, 00:17:34.386 "abort": true, 00:17:34.386 "seek_hole": false, 00:17:34.386 "seek_data": false, 00:17:34.386 "copy": true, 00:17:34.386 "nvme_iov_md": false 00:17:34.386 }, 00:17:34.386 "memory_domains": [ 00:17:34.386 { 00:17:34.386 
"dma_device_id": "system", 00:17:34.386 "dma_device_type": 1 00:17:34.386 }, 00:17:34.386 { 00:17:34.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.386 "dma_device_type": 2 00:17:34.386 } 00:17:34.386 ], 00:17:34.386 "driver_specific": {} 00:17:34.386 } 00:17:34.386 ] 00:17:34.386 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.386 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:34.386 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:34.386 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:34.386 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:34.386 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.386 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.386 BaseBdev4 00:17:34.386 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.386 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:34.386 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:34.386 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:34.386 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:34.386 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:34.386 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:34.386 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:17:34.386 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:34.386 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:34.386 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:34.386 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:17:34.386 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:34.386 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:34.645 [
00:17:34.645 {
00:17:34.645 "name": "BaseBdev4",
00:17:34.645 "aliases": [
00:17:34.645 "3e8a94f4-9f65-4dae-8e41-9ad3e7593f1a"
00:17:34.645 ],
00:17:34.645 "product_name": "Malloc disk",
00:17:34.645 "block_size": 512,
00:17:34.645 "num_blocks": 65536,
00:17:34.645 "uuid": "3e8a94f4-9f65-4dae-8e41-9ad3e7593f1a",
00:17:34.645 "assigned_rate_limits": {
00:17:34.645 "rw_ios_per_sec": 0,
00:17:34.645 "rw_mbytes_per_sec": 0,
00:17:34.645 "r_mbytes_per_sec": 0,
00:17:34.645 "w_mbytes_per_sec": 0
00:17:34.645 },
00:17:34.645 "claimed": false,
00:17:34.645 "zoned": false,
00:17:34.645 "supported_io_types": {
00:17:34.645 "read": true,
00:17:34.645 "write": true,
00:17:34.645 "unmap": true,
00:17:34.645 "flush": true,
00:17:34.645 "reset": true,
00:17:34.645 "nvme_admin": false,
00:17:34.645 "nvme_io": false,
00:17:34.645 "nvme_io_md": false,
00:17:34.645 "write_zeroes": true,
00:17:34.645 "zcopy": true,
00:17:34.645 "get_zone_info": false,
00:17:34.645 "zone_management": false,
00:17:34.645 "zone_append": false,
00:17:34.645 "compare": false,
00:17:34.645 "compare_and_write": false,
00:17:34.645 "abort": true,
00:17:34.645 "seek_hole": false,
00:17:34.645 "seek_data": false,
00:17:34.645 "copy": true,
00:17:34.645 "nvme_iov_md": false
00:17:34.645 },
00:17:34.645 "memory_domains": [
00:17:34.645 {
00:17:34.645 "dma_device_id": "system",
00:17:34.645 "dma_device_type": 1
00:17:34.645 },
00:17:34.645 {
00:17:34.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:34.645 "dma_device_type": 2
00:17:34.645 }
00:17:34.645 ],
00:17:34.645 "driver_specific": {}
00:17:34.645 }
00:17:34.645 ]
00:17:34.645 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:34.645 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:17:34.645 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:17:34.645 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:17:34.645 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:17:34.645 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:34.645 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:34.645 [2024-11-20 11:27:17.530257] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:34.645 [2024-11-20 11:27:17.530303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:34.645 [2024-11-20 11:27:17.530345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:34.645 [2024-11-20 11:27:17.532407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:17:34.645 [2024-11-20 11:27:17.532481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:17:34.646 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:34.646 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:17:34.646 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:34.646 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:34.646 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:34.646 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:34.646 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:34.646 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:34.646 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:34.646 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:34.646 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:34.646 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:34.646 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:34.646 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:34.646 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:34.646 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:34.646 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:34.646 "name": "Existed_Raid",
00:17:34.646 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:34.646 "strip_size_kb": 64,
00:17:34.646 "state": "configuring",
00:17:34.646 "raid_level": "raid5f",
00:17:34.646 "superblock": false,
00:17:34.646 "num_base_bdevs": 4,
00:17:34.646 "num_base_bdevs_discovered": 3,
00:17:34.646 "num_base_bdevs_operational": 4,
00:17:34.646 "base_bdevs_list": [
00:17:34.646 {
00:17:34.646 "name": "BaseBdev1",
00:17:34.646 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:34.646 "is_configured": false,
00:17:34.646 "data_offset": 0,
00:17:34.646 "data_size": 0
00:17:34.646 },
00:17:34.646 {
00:17:34.646 "name": "BaseBdev2",
00:17:34.646 "uuid": "9ff7d183-d858-446d-ad92-f8444ff969d6",
00:17:34.646 "is_configured": true,
00:17:34.646 "data_offset": 0,
00:17:34.646 "data_size": 65536
00:17:34.646 },
00:17:34.646 {
00:17:34.646 "name": "BaseBdev3",
00:17:34.646 "uuid": "746c0d11-9cdc-43ae-baa4-638819ce279a",
00:17:34.646 "is_configured": true,
00:17:34.646 "data_offset": 0,
00:17:34.646 "data_size": 65536
00:17:34.646 },
00:17:34.646 {
00:17:34.646 "name": "BaseBdev4",
00:17:34.646 "uuid": "3e8a94f4-9f65-4dae-8e41-9ad3e7593f1a",
00:17:34.646 "is_configured": true,
00:17:34.646 "data_offset": 0,
00:17:34.646 "data_size": 65536
00:17:34.646 }
00:17:34.646 ]
00:17:34.646 }'
00:17:34.646 11:27:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:34.646 11:27:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:34.904 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:17:34.904 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:34.904 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:35.162 [2024-11-20 11:27:18.021449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:17:35.162 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.162 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:17:35.162 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:35.162 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:35.162 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:35.162 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:35.162 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:35.162 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:35.162 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:35.162 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:35.162 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:35.162 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:35.162 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.162 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:35.163 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:35.163 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.163 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:35.163 "name": "Existed_Raid",
00:17:35.163 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:35.163 "strip_size_kb": 64,
00:17:35.163 "state": "configuring",
00:17:35.163 "raid_level": "raid5f",
00:17:35.163 "superblock": false,
00:17:35.163 "num_base_bdevs": 4,
00:17:35.163 "num_base_bdevs_discovered": 2,
00:17:35.163 "num_base_bdevs_operational": 4,
00:17:35.163 "base_bdevs_list": [
00:17:35.163 {
00:17:35.163 "name": "BaseBdev1",
00:17:35.163 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:35.163 "is_configured": false,
00:17:35.163 "data_offset": 0,
00:17:35.163 "data_size": 0
00:17:35.163 },
00:17:35.163 {
00:17:35.163 "name": null,
00:17:35.163 "uuid": "9ff7d183-d858-446d-ad92-f8444ff969d6",
00:17:35.163 "is_configured": false,
00:17:35.163 "data_offset": 0,
00:17:35.163 "data_size": 65536
00:17:35.163 },
00:17:35.163 {
00:17:35.163 "name": "BaseBdev3",
00:17:35.163 "uuid": "746c0d11-9cdc-43ae-baa4-638819ce279a",
00:17:35.163 "is_configured": true,
00:17:35.163 "data_offset": 0,
00:17:35.163 "data_size": 65536
00:17:35.163 },
00:17:35.163 {
00:17:35.163 "name": "BaseBdev4",
00:17:35.163 "uuid": "3e8a94f4-9f65-4dae-8e41-9ad3e7593f1a",
00:17:35.163 "is_configured": true,
00:17:35.163 "data_offset": 0,
00:17:35.163 "data_size": 65536
00:17:35.163 }
00:17:35.163 ]
00:17:35.163 }'
00:17:35.163 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:35.163 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:35.429 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:35.429 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.429 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:35.429 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:17:35.429 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.429 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:17:35.429 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:17:35.429 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.429 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:35.740 [2024-11-20 11:27:18.563727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:35.740 BaseBdev1
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:35.740 [
00:17:35.740 {
00:17:35.740 "name": "BaseBdev1",
00:17:35.740 "aliases": [
00:17:35.740 "0ac65dfc-8202-4cf5-bd81-25e009f1608e"
00:17:35.740 ],
00:17:35.740 "product_name": "Malloc disk",
00:17:35.740 "block_size": 512,
00:17:35.740 "num_blocks": 65536,
00:17:35.740 "uuid": "0ac65dfc-8202-4cf5-bd81-25e009f1608e",
00:17:35.740 "assigned_rate_limits": {
00:17:35.740 "rw_ios_per_sec": 0,
00:17:35.740 "rw_mbytes_per_sec": 0,
00:17:35.740 "r_mbytes_per_sec": 0,
00:17:35.740 "w_mbytes_per_sec": 0
00:17:35.740 },
00:17:35.740 "claimed": true,
00:17:35.740 "claim_type": "exclusive_write",
00:17:35.740 "zoned": false,
00:17:35.740 "supported_io_types": {
00:17:35.740 "read": true,
00:17:35.740 "write": true,
00:17:35.740 "unmap": true,
00:17:35.740 "flush": true,
00:17:35.740 "reset": true,
00:17:35.740 "nvme_admin": false,
00:17:35.740 "nvme_io": false,
00:17:35.740 "nvme_io_md": false,
00:17:35.740 "write_zeroes": true,
00:17:35.740 "zcopy": true,
00:17:35.740 "get_zone_info": false,
00:17:35.740 "zone_management": false,
00:17:35.740 "zone_append": false,
00:17:35.740 "compare": false,
00:17:35.740 "compare_and_write": false,
00:17:35.740 "abort": true,
00:17:35.740 "seek_hole": false,
00:17:35.740 "seek_data": false,
00:17:35.740 "copy": true,
00:17:35.740 "nvme_iov_md": false
00:17:35.740 },
00:17:35.740 "memory_domains": [
00:17:35.740 {
00:17:35.740 "dma_device_id": "system",
00:17:35.740 "dma_device_type": 1
00:17:35.740 },
00:17:35.740 {
00:17:35.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:35.740 "dma_device_type": 2
00:17:35.740 }
00:17:35.740 ],
00:17:35.740 "driver_specific": {}
00:17:35.740 }
00:17:35.740 ]
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.740 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:35.740 "name": "Existed_Raid",
00:17:35.740 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:35.740 "strip_size_kb": 64,
00:17:35.740 "state": "configuring",
00:17:35.740 "raid_level": "raid5f",
00:17:35.740 "superblock": false,
00:17:35.740 "num_base_bdevs": 4,
00:17:35.740 "num_base_bdevs_discovered": 3,
00:17:35.740 "num_base_bdevs_operational": 4,
00:17:35.740 "base_bdevs_list": [
00:17:35.740 {
00:17:35.740 "name": "BaseBdev1",
00:17:35.740 "uuid": "0ac65dfc-8202-4cf5-bd81-25e009f1608e",
00:17:35.740 "is_configured": true,
00:17:35.740 "data_offset": 0,
00:17:35.740 "data_size": 65536
00:17:35.740 },
00:17:35.740 {
00:17:35.740 "name": null,
00:17:35.740 "uuid": "9ff7d183-d858-446d-ad92-f8444ff969d6",
00:17:35.741 "is_configured": false,
00:17:35.741 "data_offset": 0,
00:17:35.741 "data_size": 65536
00:17:35.741 },
00:17:35.741 {
00:17:35.741 "name": "BaseBdev3",
00:17:35.741 "uuid": "746c0d11-9cdc-43ae-baa4-638819ce279a",
00:17:35.741 "is_configured": true,
00:17:35.741 "data_offset": 0,
00:17:35.741 "data_size": 65536
00:17:35.741 },
00:17:35.741 {
00:17:35.741 "name": "BaseBdev4",
00:17:35.741 "uuid": "3e8a94f4-9f65-4dae-8e41-9ad3e7593f1a",
00:17:35.741 "is_configured": true,
00:17:35.741 "data_offset": 0,
00:17:35.741 "data_size": 65536
00:17:35.741 }
00:17:35.741 ]
00:17:35.741 }'
00:17:35.741 11:27:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:35.741 11:27:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:36.001 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:17:36.001 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:36.001 11:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.001 11:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:36.274 11:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.274 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:17:36.274 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:17:36.274 11:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.274 11:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:36.274 [2024-11-20 11:27:19.146870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:17:36.275 11:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.275 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:17:36.275 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:36.275 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:36.275 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:36.275 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:36.275 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:36.275 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:36.275 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:36.275 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:36.275 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:36.275 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:36.275 11:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.275 11:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:36.275 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:36.275 11:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.275 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:36.275 "name": "Existed_Raid",
00:17:36.275 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:36.275 "strip_size_kb": 64,
00:17:36.275 "state": "configuring",
00:17:36.275 "raid_level": "raid5f",
00:17:36.275 "superblock": false,
00:17:36.275 "num_base_bdevs": 4,
00:17:36.275 "num_base_bdevs_discovered": 2,
00:17:36.275 "num_base_bdevs_operational": 4,
00:17:36.275 "base_bdevs_list": [
00:17:36.275 {
00:17:36.275 "name": "BaseBdev1",
00:17:36.275 "uuid": "0ac65dfc-8202-4cf5-bd81-25e009f1608e",
00:17:36.275 "is_configured": true,
00:17:36.275 "data_offset": 0,
00:17:36.275 "data_size": 65536
00:17:36.276 },
00:17:36.276 {
00:17:36.276 "name": null,
00:17:36.276 "uuid": "9ff7d183-d858-446d-ad92-f8444ff969d6",
00:17:36.276 "is_configured": false,
00:17:36.276 "data_offset": 0,
00:17:36.276 "data_size": 65536
00:17:36.276 },
00:17:36.276 {
00:17:36.276 "name": null,
00:17:36.276 "uuid": "746c0d11-9cdc-43ae-baa4-638819ce279a",
00:17:36.276 "is_configured": false,
00:17:36.276 "data_offset": 0,
00:17:36.276 "data_size": 65536
00:17:36.276 },
00:17:36.276 {
00:17:36.276 "name": "BaseBdev4",
00:17:36.276 "uuid": "3e8a94f4-9f65-4dae-8e41-9ad3e7593f1a",
00:17:36.276 "is_configured": true,
00:17:36.276 "data_offset": 0,
00:17:36.276 "data_size": 65536
00:17:36.276 }
00:17:36.276 ]
00:17:36.276 }'
00:17:36.276 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:36.276 11:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:36.551 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:36.551 11:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.551 11:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:36.551 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:17:36.551 11:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.551 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:17:36.551 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:17:36.551 11:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.551 11:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:36.551 [2024-11-20 11:27:19.657998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:17:36.551 11:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.551 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:17:36.551 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:36.551 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:36.551 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:36.551 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:36.551 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:36.551 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:36.551 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:36.811 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:36.811 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:36.811 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:36.811 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:36.811 11:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.811 11:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:36.811 11:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.811 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:36.811 "name": "Existed_Raid",
00:17:36.811 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:36.811 "strip_size_kb": 64,
00:17:36.811 "state": "configuring",
00:17:36.811 "raid_level": "raid5f",
00:17:36.811 "superblock": false,
00:17:36.811 "num_base_bdevs": 4,
00:17:36.811 "num_base_bdevs_discovered": 3,
00:17:36.811 "num_base_bdevs_operational": 4,
00:17:36.811 "base_bdevs_list": [
00:17:36.811 {
00:17:36.811 "name": "BaseBdev1",
00:17:36.811 "uuid": "0ac65dfc-8202-4cf5-bd81-25e009f1608e",
00:17:36.811 "is_configured": true,
00:17:36.811 "data_offset": 0,
00:17:36.811 "data_size": 65536
00:17:36.811 },
00:17:36.811 {
00:17:36.811 "name": null,
00:17:36.811 "uuid": "9ff7d183-d858-446d-ad92-f8444ff969d6",
00:17:36.811 "is_configured": false,
00:17:36.811 "data_offset": 0,
00:17:36.811 "data_size": 65536
00:17:36.811 },
00:17:36.811 {
00:17:36.811 "name": "BaseBdev3",
00:17:36.811 "uuid": "746c0d11-9cdc-43ae-baa4-638819ce279a",
00:17:36.811 "is_configured": true,
00:17:36.811 "data_offset": 0,
00:17:36.811 "data_size": 65536
00:17:36.811 },
00:17:36.811 {
00:17:36.811 "name": "BaseBdev4",
00:17:36.811 "uuid": "3e8a94f4-9f65-4dae-8e41-9ad3e7593f1a",
00:17:36.811 "is_configured": true,
00:17:36.811 "data_offset": 0,
00:17:36.811 "data_size": 65536
00:17:36.811 }
00:17:36.811 ]
00:17:36.811 }'
00:17:36.811 11:27:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:36.811 11:27:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:37.072 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:37.072 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:17:37.072 11:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:37.072 11:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:37.331 11:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:37.331 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:17:37.331 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:17:37.331 11:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:37.331 11:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:37.331 [2024-11-20 11:27:20.197131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:17:37.331 11:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:37.331 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:17:37.331 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:37.331 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:37.331 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:37.331 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:37.331 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:37.331 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:37.331 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:37.331 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:37.332 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:37.332 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:37.332 11:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:37.332 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:37.332 11:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:37.332 11:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:37.332 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:37.332 "name": "Existed_Raid",
00:17:37.332 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:37.332 "strip_size_kb": 64,
00:17:37.332 "state": "configuring",
00:17:37.332 "raid_level": "raid5f",
00:17:37.332 "superblock": false,
00:17:37.332 "num_base_bdevs": 4,
00:17:37.332 "num_base_bdevs_discovered": 2,
00:17:37.332 "num_base_bdevs_operational": 4,
00:17:37.332 "base_bdevs_list": [
00:17:37.332 {
00:17:37.332 "name": null,
00:17:37.332 "uuid": "0ac65dfc-8202-4cf5-bd81-25e009f1608e",
00:17:37.332 "is_configured": false,
00:17:37.332 "data_offset": 0,
00:17:37.332 "data_size": 65536
00:17:37.332 },
00:17:37.332 {
00:17:37.332 "name": null,
00:17:37.332 "uuid": "9ff7d183-d858-446d-ad92-f8444ff969d6",
00:17:37.332 "is_configured": false,
00:17:37.332 "data_offset": 0,
00:17:37.332 "data_size": 65536
00:17:37.332 },
00:17:37.332 {
00:17:37.332 "name": "BaseBdev3",
00:17:37.332 "uuid": "746c0d11-9cdc-43ae-baa4-638819ce279a",
00:17:37.332 "is_configured": true,
00:17:37.332 "data_offset": 0,
00:17:37.332 "data_size": 65536
00:17:37.332 },
00:17:37.332 {
00:17:37.332 "name": "BaseBdev4",
00:17:37.332 "uuid": "3e8a94f4-9f65-4dae-8e41-9ad3e7593f1a",
00:17:37.332 "is_configured": true,
00:17:37.332 "data_offset": 0,
00:17:37.332 "data_size": 65536
00:17:37.332 }
00:17:37.332 ]
00:17:37.332 }'
00:17:37.332 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:37.332 11:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:37.902 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:37.902 11:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:37.902 11:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:37.902 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:17:37.902 11:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:37.902 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:17:37.902 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:17:37.902 11:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:37.902 11:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:37.902 [2024-11-20 11:27:20.846542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:37.902 11:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:37.902 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:17:37.902 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:37.903 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:37.903 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:37.903 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:37.903 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:37.903 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:37.903 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:37.903 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:37.903 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:37.903 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:37.903 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:37.903 11:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:37.903 11:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:37.903 11:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:37.903 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:37.903 "name": "Existed_Raid",
00:17:37.903 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:37.903 "strip_size_kb": 64,
00:17:37.903 "state": "configuring",
00:17:37.903 "raid_level": "raid5f",
00:17:37.903 "superblock": false,
00:17:37.903 "num_base_bdevs": 4,
00:17:37.903 "num_base_bdevs_discovered": 3,
00:17:37.903 "num_base_bdevs_operational": 4,
00:17:37.903 "base_bdevs_list": [
00:17:37.903 {
00:17:37.903 "name": null,
00:17:37.903 "uuid": "0ac65dfc-8202-4cf5-bd81-25e009f1608e",
00:17:37.903 "is_configured": false,
00:17:37.903 "data_offset": 0,
00:17:37.903 "data_size": 65536
00:17:37.903 },
00:17:37.903 {
00:17:37.903 "name": "BaseBdev2",
00:17:37.903 "uuid": "9ff7d183-d858-446d-ad92-f8444ff969d6",
00:17:37.903 "is_configured": true,
00:17:37.903 "data_offset": 0,
00:17:37.903 "data_size": 65536
00:17:37.903 },
00:17:37.903 {
00:17:37.903 "name": "BaseBdev3",
00:17:37.903 "uuid": "746c0d11-9cdc-43ae-baa4-638819ce279a",
00:17:37.903 "is_configured": true,
00:17:37.903 "data_offset": 0,
00:17:37.903 "data_size": 65536
00:17:37.903 },
00:17:37.903 {
00:17:37.903 "name": "BaseBdev4",
00:17:37.903 "uuid": "3e8a94f4-9f65-4dae-8e41-9ad3e7593f1a",
00:17:37.903 "is_configured": true,
00:17:37.903 "data_offset": 0,
00:17:37.903 "data_size": 65536
00:17:37.903 }
00:17:37.903 ]
00:17:37.903 }'
00:17:37.903 11:27:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:37.903 11:27:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0ac65dfc-8202-4cf5-bd81-25e009f1608e
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:38.473 [2024-11-20 11:27:21.423884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:17:38.473 [2024-11-20 11:27:21.423940] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:17:38.473 [2024-11-20 11:27:21.423948] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:17:38.473 [2024-11-20 11:27:21.424228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:17:38.473 [2024-11-20 11:27:21.431622] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:17:38.473 [2024-11-20 11:27:21.431653] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
00:17:38.473 [2024-11-20 11:27:21.431930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:38.473 NewBaseBdev
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.473 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.474 [ 00:17:38.474 { 00:17:38.474 "name": "NewBaseBdev", 00:17:38.474 "aliases": [ 00:17:38.474 "0ac65dfc-8202-4cf5-bd81-25e009f1608e" 00:17:38.474 ], 00:17:38.474 "product_name": "Malloc disk", 00:17:38.474 "block_size": 512, 00:17:38.474 "num_blocks": 65536, 00:17:38.474 "uuid": "0ac65dfc-8202-4cf5-bd81-25e009f1608e", 00:17:38.474 "assigned_rate_limits": { 00:17:38.474 "rw_ios_per_sec": 0, 00:17:38.474 "rw_mbytes_per_sec": 0, 00:17:38.474 "r_mbytes_per_sec": 0, 00:17:38.474 "w_mbytes_per_sec": 0 00:17:38.474 }, 00:17:38.474 "claimed": true, 00:17:38.474 "claim_type": "exclusive_write", 00:17:38.474 "zoned": false, 00:17:38.474 "supported_io_types": { 00:17:38.474 "read": true, 00:17:38.474 "write": true, 00:17:38.474 "unmap": true, 00:17:38.474 "flush": true, 00:17:38.474 "reset": true, 00:17:38.474 "nvme_admin": false, 00:17:38.474 "nvme_io": false, 00:17:38.474 "nvme_io_md": false, 00:17:38.474 "write_zeroes": true, 00:17:38.474 "zcopy": true, 00:17:38.474 "get_zone_info": false, 00:17:38.474 "zone_management": false, 00:17:38.474 "zone_append": false, 00:17:38.474 "compare": false, 00:17:38.474 "compare_and_write": false, 00:17:38.474 "abort": true, 00:17:38.474 "seek_hole": false, 00:17:38.474 "seek_data": false, 00:17:38.474 "copy": true, 00:17:38.474 "nvme_iov_md": false 00:17:38.474 }, 00:17:38.474 "memory_domains": [ 00:17:38.474 { 00:17:38.474 "dma_device_id": "system", 00:17:38.474 "dma_device_type": 1 00:17:38.474 }, 00:17:38.474 { 00:17:38.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.474 "dma_device_type": 2 00:17:38.474 } 
00:17:38.474 ], 00:17:38.474 "driver_specific": {} 00:17:38.474 } 00:17:38.474 ] 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.474 "name": "Existed_Raid", 00:17:38.474 "uuid": "ee6f5cf6-5c0b-4f0c-a246-c173d545edde", 00:17:38.474 "strip_size_kb": 64, 00:17:38.474 "state": "online", 00:17:38.474 "raid_level": "raid5f", 00:17:38.474 "superblock": false, 00:17:38.474 "num_base_bdevs": 4, 00:17:38.474 "num_base_bdevs_discovered": 4, 00:17:38.474 "num_base_bdevs_operational": 4, 00:17:38.474 "base_bdevs_list": [ 00:17:38.474 { 00:17:38.474 "name": "NewBaseBdev", 00:17:38.474 "uuid": "0ac65dfc-8202-4cf5-bd81-25e009f1608e", 00:17:38.474 "is_configured": true, 00:17:38.474 "data_offset": 0, 00:17:38.474 "data_size": 65536 00:17:38.474 }, 00:17:38.474 { 00:17:38.474 "name": "BaseBdev2", 00:17:38.474 "uuid": "9ff7d183-d858-446d-ad92-f8444ff969d6", 00:17:38.474 "is_configured": true, 00:17:38.474 "data_offset": 0, 00:17:38.474 "data_size": 65536 00:17:38.474 }, 00:17:38.474 { 00:17:38.474 "name": "BaseBdev3", 00:17:38.474 "uuid": "746c0d11-9cdc-43ae-baa4-638819ce279a", 00:17:38.474 "is_configured": true, 00:17:38.474 "data_offset": 0, 00:17:38.474 "data_size": 65536 00:17:38.474 }, 00:17:38.474 { 00:17:38.474 "name": "BaseBdev4", 00:17:38.474 "uuid": "3e8a94f4-9f65-4dae-8e41-9ad3e7593f1a", 00:17:38.474 "is_configured": true, 00:17:38.474 "data_offset": 0, 00:17:38.474 "data_size": 65536 00:17:38.474 } 00:17:38.474 ] 00:17:38.474 }' 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.474 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.051 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:39.051 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:39.051 11:27:21 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:39.051 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:39.051 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:39.051 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:39.051 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:39.051 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:39.051 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.051 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.051 [2024-11-20 11:27:21.901548] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.051 11:27:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.051 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:39.051 "name": "Existed_Raid", 00:17:39.051 "aliases": [ 00:17:39.051 "ee6f5cf6-5c0b-4f0c-a246-c173d545edde" 00:17:39.051 ], 00:17:39.051 "product_name": "Raid Volume", 00:17:39.051 "block_size": 512, 00:17:39.051 "num_blocks": 196608, 00:17:39.051 "uuid": "ee6f5cf6-5c0b-4f0c-a246-c173d545edde", 00:17:39.051 "assigned_rate_limits": { 00:17:39.051 "rw_ios_per_sec": 0, 00:17:39.051 "rw_mbytes_per_sec": 0, 00:17:39.051 "r_mbytes_per_sec": 0, 00:17:39.051 "w_mbytes_per_sec": 0 00:17:39.051 }, 00:17:39.051 "claimed": false, 00:17:39.051 "zoned": false, 00:17:39.051 "supported_io_types": { 00:17:39.051 "read": true, 00:17:39.051 "write": true, 00:17:39.051 "unmap": false, 00:17:39.051 "flush": false, 00:17:39.051 "reset": true, 00:17:39.051 "nvme_admin": false, 00:17:39.051 "nvme_io": false, 00:17:39.051 "nvme_io_md": 
false, 00:17:39.051 "write_zeroes": true, 00:17:39.051 "zcopy": false, 00:17:39.051 "get_zone_info": false, 00:17:39.051 "zone_management": false, 00:17:39.051 "zone_append": false, 00:17:39.051 "compare": false, 00:17:39.051 "compare_and_write": false, 00:17:39.051 "abort": false, 00:17:39.051 "seek_hole": false, 00:17:39.051 "seek_data": false, 00:17:39.051 "copy": false, 00:17:39.051 "nvme_iov_md": false 00:17:39.051 }, 00:17:39.051 "driver_specific": { 00:17:39.051 "raid": { 00:17:39.051 "uuid": "ee6f5cf6-5c0b-4f0c-a246-c173d545edde", 00:17:39.051 "strip_size_kb": 64, 00:17:39.051 "state": "online", 00:17:39.051 "raid_level": "raid5f", 00:17:39.051 "superblock": false, 00:17:39.051 "num_base_bdevs": 4, 00:17:39.051 "num_base_bdevs_discovered": 4, 00:17:39.051 "num_base_bdevs_operational": 4, 00:17:39.051 "base_bdevs_list": [ 00:17:39.051 { 00:17:39.051 "name": "NewBaseBdev", 00:17:39.051 "uuid": "0ac65dfc-8202-4cf5-bd81-25e009f1608e", 00:17:39.051 "is_configured": true, 00:17:39.051 "data_offset": 0, 00:17:39.051 "data_size": 65536 00:17:39.051 }, 00:17:39.051 { 00:17:39.051 "name": "BaseBdev2", 00:17:39.051 "uuid": "9ff7d183-d858-446d-ad92-f8444ff969d6", 00:17:39.051 "is_configured": true, 00:17:39.051 "data_offset": 0, 00:17:39.051 "data_size": 65536 00:17:39.051 }, 00:17:39.051 { 00:17:39.051 "name": "BaseBdev3", 00:17:39.051 "uuid": "746c0d11-9cdc-43ae-baa4-638819ce279a", 00:17:39.051 "is_configured": true, 00:17:39.051 "data_offset": 0, 00:17:39.051 "data_size": 65536 00:17:39.051 }, 00:17:39.051 { 00:17:39.051 "name": "BaseBdev4", 00:17:39.051 "uuid": "3e8a94f4-9f65-4dae-8e41-9ad3e7593f1a", 00:17:39.051 "is_configured": true, 00:17:39.051 "data_offset": 0, 00:17:39.051 "data_size": 65536 00:17:39.051 } 00:17:39.051 ] 00:17:39.051 } 00:17:39.051 } 00:17:39.051 }' 00:17:39.051 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:39.051 11:27:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:39.051 BaseBdev2 00:17:39.051 BaseBdev3 00:17:39.051 BaseBdev4' 00:17:39.051 11:27:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.051 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:39.051 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.051 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:39.051 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.051 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.051 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.051 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.051 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:39.051 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:39.051 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.052 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:39.052 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.052 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.052 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:17:39.052 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.052 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:39.052 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:39.052 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.052 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.052 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:39.052 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.052 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.052 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.321 [2024-11-20 11:27:22.236714] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:39.321 [2024-11-20 11:27:22.236808] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:39.321 [2024-11-20 11:27:22.236940] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.321 [2024-11-20 11:27:22.237286] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:39.321 [2024-11-20 11:27:22.237347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82981 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82981 ']' 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82981 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.321 11:27:22 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82981 00:17:39.321 killing process with pid 82981 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82981' 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82981 00:17:39.321 [2024-11-20 11:27:22.283031] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:39.321 11:27:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82981 00:17:39.580 [2024-11-20 11:27:22.692522] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:40.963 00:17:40.963 real 0m12.122s 00:17:40.963 user 0m19.291s 00:17:40.963 sys 0m2.155s 00:17:40.963 ************************************ 00:17:40.963 END TEST raid5f_state_function_test 00:17:40.963 ************************************ 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.963 11:27:23 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:17:40.963 11:27:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:40.963 11:27:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:40.963 11:27:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:40.963 ************************************ 00:17:40.963 START TEST 
raid5f_state_function_test_sb 00:17:40.963 ************************************ 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:40.963 
11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:40.963 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:40.964 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:40.964 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83658 00:17:40.964 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:40.964 11:27:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83658' 00:17:40.964 Process raid pid: 83658 00:17:40.964 11:27:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83658 00:17:40.964 11:27:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83658 ']' 00:17:40.964 11:27:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.964 11:27:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.964 11:27:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.964 11:27:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.964 11:27:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.964 [2024-11-20 11:27:23.988993] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:17:40.964 [2024-11-20 11:27:23.989183] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.223 [2024-11-20 11:27:24.161171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.223 [2024-11-20 11:27:24.275239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.482 [2024-11-20 11:27:24.474666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.482 [2024-11-20 11:27:24.474710] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.742 11:27:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.742 11:27:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:41.742 11:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:41.742 11:27:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.742 11:27:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.742 [2024-11-20 11:27:24.851879] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:41.742 [2024-11-20 11:27:24.851982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:41.742 [2024-11-20 11:27:24.852014] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:41.742 [2024-11-20 11:27:24.852038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:41.742 [2024-11-20 11:27:24.852063] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:17:41.742 [2024-11-20 11:27:24.852084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:41.742 [2024-11-20 11:27:24.852120] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:41.742 [2024-11-20 11:27:24.852158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:42.002 11:27:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.002 11:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:42.002 11:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.002 11:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.002 11:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.002 11:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.002 11:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:42.002 11:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.002 11:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.002 11:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.002 11:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.002 11:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.002 11:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:17:42.002 11:27:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.002 11:27:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.002 11:27:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.002 11:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.002 "name": "Existed_Raid", 00:17:42.002 "uuid": "87a0cedf-52ac-4d35-bc6b-cb40d509169d", 00:17:42.002 "strip_size_kb": 64, 00:17:42.002 "state": "configuring", 00:17:42.002 "raid_level": "raid5f", 00:17:42.002 "superblock": true, 00:17:42.002 "num_base_bdevs": 4, 00:17:42.002 "num_base_bdevs_discovered": 0, 00:17:42.002 "num_base_bdevs_operational": 4, 00:17:42.002 "base_bdevs_list": [ 00:17:42.002 { 00:17:42.002 "name": "BaseBdev1", 00:17:42.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.002 "is_configured": false, 00:17:42.002 "data_offset": 0, 00:17:42.002 "data_size": 0 00:17:42.002 }, 00:17:42.002 { 00:17:42.002 "name": "BaseBdev2", 00:17:42.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.002 "is_configured": false, 00:17:42.002 "data_offset": 0, 00:17:42.002 "data_size": 0 00:17:42.002 }, 00:17:42.002 { 00:17:42.002 "name": "BaseBdev3", 00:17:42.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.002 "is_configured": false, 00:17:42.002 "data_offset": 0, 00:17:42.002 "data_size": 0 00:17:42.002 }, 00:17:42.002 { 00:17:42.002 "name": "BaseBdev4", 00:17:42.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.002 "is_configured": false, 00:17:42.002 "data_offset": 0, 00:17:42.002 "data_size": 0 00:17:42.002 } 00:17:42.002 ] 00:17:42.002 }' 00:17:42.002 11:27:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.002 11:27:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:42.262 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:42.262 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.262 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.262 [2024-11-20 11:27:25.311035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:42.262 [2024-11-20 11:27:25.311131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:42.262 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.262 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:42.262 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.262 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.262 [2024-11-20 11:27:25.323033] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:42.262 [2024-11-20 11:27:25.323076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:42.262 [2024-11-20 11:27:25.323085] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:42.262 [2024-11-20 11:27:25.323094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:42.262 [2024-11-20 11:27:25.323101] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:42.262 [2024-11-20 11:27:25.323110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:42.262 [2024-11-20 11:27:25.323116] 
bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:42.262 [2024-11-20 11:27:25.323125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:42.262 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.262 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:42.262 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.262 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.262 [2024-11-20 11:27:25.370742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.262 BaseBdev1 00:17:42.262 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.262 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:42.262 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:42.262 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:42.262 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:42.262 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:42.262 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:42.262 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:42.262 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.262 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:42.521 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.521 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:42.521 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.521 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.521 [ 00:17:42.521 { 00:17:42.521 "name": "BaseBdev1", 00:17:42.521 "aliases": [ 00:17:42.521 "842a9585-85f5-4ed7-bb2e-29bde8b0bbbc" 00:17:42.521 ], 00:17:42.521 "product_name": "Malloc disk", 00:17:42.521 "block_size": 512, 00:17:42.521 "num_blocks": 65536, 00:17:42.521 "uuid": "842a9585-85f5-4ed7-bb2e-29bde8b0bbbc", 00:17:42.521 "assigned_rate_limits": { 00:17:42.521 "rw_ios_per_sec": 0, 00:17:42.521 "rw_mbytes_per_sec": 0, 00:17:42.521 "r_mbytes_per_sec": 0, 00:17:42.521 "w_mbytes_per_sec": 0 00:17:42.521 }, 00:17:42.521 "claimed": true, 00:17:42.521 "claim_type": "exclusive_write", 00:17:42.521 "zoned": false, 00:17:42.521 "supported_io_types": { 00:17:42.521 "read": true, 00:17:42.521 "write": true, 00:17:42.521 "unmap": true, 00:17:42.521 "flush": true, 00:17:42.521 "reset": true, 00:17:42.521 "nvme_admin": false, 00:17:42.521 "nvme_io": false, 00:17:42.521 "nvme_io_md": false, 00:17:42.521 "write_zeroes": true, 00:17:42.521 "zcopy": true, 00:17:42.521 "get_zone_info": false, 00:17:42.521 "zone_management": false, 00:17:42.521 "zone_append": false, 00:17:42.521 "compare": false, 00:17:42.521 "compare_and_write": false, 00:17:42.521 "abort": true, 00:17:42.521 "seek_hole": false, 00:17:42.521 "seek_data": false, 00:17:42.521 "copy": true, 00:17:42.521 "nvme_iov_md": false 00:17:42.521 }, 00:17:42.521 "memory_domains": [ 00:17:42.521 { 00:17:42.521 "dma_device_id": "system", 00:17:42.521 "dma_device_type": 1 00:17:42.521 }, 00:17:42.521 { 00:17:42.521 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:42.521 "dma_device_type": 2 00:17:42.521 } 00:17:42.521 ], 00:17:42.521 "driver_specific": {} 00:17:42.521 } 00:17:42.521 ] 00:17:42.521 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.521 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:42.521 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:42.521 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.521 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.521 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.521 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.521 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:42.521 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.521 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.521 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.521 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.521 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.521 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.521 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.521 11:27:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.521 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.522 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.522 "name": "Existed_Raid", 00:17:42.522 "uuid": "b26c609a-ede6-4bb2-a955-cc7cd02a1055", 00:17:42.522 "strip_size_kb": 64, 00:17:42.522 "state": "configuring", 00:17:42.522 "raid_level": "raid5f", 00:17:42.522 "superblock": true, 00:17:42.522 "num_base_bdevs": 4, 00:17:42.522 "num_base_bdevs_discovered": 1, 00:17:42.522 "num_base_bdevs_operational": 4, 00:17:42.522 "base_bdevs_list": [ 00:17:42.522 { 00:17:42.522 "name": "BaseBdev1", 00:17:42.522 "uuid": "842a9585-85f5-4ed7-bb2e-29bde8b0bbbc", 00:17:42.522 "is_configured": true, 00:17:42.522 "data_offset": 2048, 00:17:42.522 "data_size": 63488 00:17:42.522 }, 00:17:42.522 { 00:17:42.522 "name": "BaseBdev2", 00:17:42.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.522 "is_configured": false, 00:17:42.522 "data_offset": 0, 00:17:42.522 "data_size": 0 00:17:42.522 }, 00:17:42.522 { 00:17:42.522 "name": "BaseBdev3", 00:17:42.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.522 "is_configured": false, 00:17:42.522 "data_offset": 0, 00:17:42.522 "data_size": 0 00:17:42.522 }, 00:17:42.522 { 00:17:42.522 "name": "BaseBdev4", 00:17:42.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.522 "is_configured": false, 00:17:42.522 "data_offset": 0, 00:17:42.522 "data_size": 0 00:17:42.522 } 00:17:42.522 ] 00:17:42.522 }' 00:17:42.522 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.522 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.781 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:42.781 11:27:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.781 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.781 [2024-11-20 11:27:25.849995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:42.781 [2024-11-20 11:27:25.850111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:42.781 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.781 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:42.781 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.781 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.781 [2024-11-20 11:27:25.862026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.781 [2024-11-20 11:27:25.863911] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:42.781 [2024-11-20 11:27:25.864007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:42.781 [2024-11-20 11:27:25.864039] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:42.781 [2024-11-20 11:27:25.864067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:42.781 [2024-11-20 11:27:25.864088] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:42.781 [2024-11-20 11:27:25.864112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:42.781 11:27:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.781 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:42.781 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:42.781 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:42.781 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.781 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.781 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.781 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.782 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:42.782 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.782 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.782 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.782 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.782 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.782 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.782 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.782 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.782 11:27:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.041 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.041 "name": "Existed_Raid", 00:17:43.041 "uuid": "5eea3994-e1c7-4f37-a479-5ce2afda352f", 00:17:43.041 "strip_size_kb": 64, 00:17:43.041 "state": "configuring", 00:17:43.041 "raid_level": "raid5f", 00:17:43.041 "superblock": true, 00:17:43.041 "num_base_bdevs": 4, 00:17:43.041 "num_base_bdevs_discovered": 1, 00:17:43.041 "num_base_bdevs_operational": 4, 00:17:43.041 "base_bdevs_list": [ 00:17:43.041 { 00:17:43.041 "name": "BaseBdev1", 00:17:43.041 "uuid": "842a9585-85f5-4ed7-bb2e-29bde8b0bbbc", 00:17:43.041 "is_configured": true, 00:17:43.041 "data_offset": 2048, 00:17:43.041 "data_size": 63488 00:17:43.041 }, 00:17:43.041 { 00:17:43.041 "name": "BaseBdev2", 00:17:43.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.041 "is_configured": false, 00:17:43.041 "data_offset": 0, 00:17:43.041 "data_size": 0 00:17:43.041 }, 00:17:43.041 { 00:17:43.041 "name": "BaseBdev3", 00:17:43.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.041 "is_configured": false, 00:17:43.041 "data_offset": 0, 00:17:43.041 "data_size": 0 00:17:43.041 }, 00:17:43.041 { 00:17:43.041 "name": "BaseBdev4", 00:17:43.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.041 "is_configured": false, 00:17:43.041 "data_offset": 0, 00:17:43.041 "data_size": 0 00:17:43.041 } 00:17:43.041 ] 00:17:43.041 }' 00:17:43.041 11:27:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.041 11:27:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.302 [2024-11-20 11:27:26.341661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:43.302 BaseBdev2 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.302 [ 00:17:43.302 { 00:17:43.302 "name": "BaseBdev2", 00:17:43.302 "aliases": [ 00:17:43.302 
"12f18d9e-bd39-4dfb-91d8-420a45a26167" 00:17:43.302 ], 00:17:43.302 "product_name": "Malloc disk", 00:17:43.302 "block_size": 512, 00:17:43.302 "num_blocks": 65536, 00:17:43.302 "uuid": "12f18d9e-bd39-4dfb-91d8-420a45a26167", 00:17:43.302 "assigned_rate_limits": { 00:17:43.302 "rw_ios_per_sec": 0, 00:17:43.302 "rw_mbytes_per_sec": 0, 00:17:43.302 "r_mbytes_per_sec": 0, 00:17:43.302 "w_mbytes_per_sec": 0 00:17:43.302 }, 00:17:43.302 "claimed": true, 00:17:43.302 "claim_type": "exclusive_write", 00:17:43.302 "zoned": false, 00:17:43.302 "supported_io_types": { 00:17:43.302 "read": true, 00:17:43.302 "write": true, 00:17:43.302 "unmap": true, 00:17:43.302 "flush": true, 00:17:43.302 "reset": true, 00:17:43.302 "nvme_admin": false, 00:17:43.302 "nvme_io": false, 00:17:43.302 "nvme_io_md": false, 00:17:43.302 "write_zeroes": true, 00:17:43.302 "zcopy": true, 00:17:43.302 "get_zone_info": false, 00:17:43.302 "zone_management": false, 00:17:43.302 "zone_append": false, 00:17:43.302 "compare": false, 00:17:43.302 "compare_and_write": false, 00:17:43.302 "abort": true, 00:17:43.302 "seek_hole": false, 00:17:43.302 "seek_data": false, 00:17:43.302 "copy": true, 00:17:43.302 "nvme_iov_md": false 00:17:43.302 }, 00:17:43.302 "memory_domains": [ 00:17:43.302 { 00:17:43.302 "dma_device_id": "system", 00:17:43.302 "dma_device_type": 1 00:17:43.302 }, 00:17:43.302 { 00:17:43.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.302 "dma_device_type": 2 00:17:43.302 } 00:17:43.302 ], 00:17:43.302 "driver_specific": {} 00:17:43.302 } 00:17:43.302 ] 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
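[Annotation, not part of the log] The `waitforbdev BaseBdev2` cycle above polls `bdev_get_bdevs -b BaseBdev2 -t 2000` and gets back the Malloc descriptor printed in the log, already `claimed` with `exclusive_write` because the configuring raid bdev took ownership of it. A small Python sketch of two invariants visible in that descriptor (field values abridged from the log; the checks themselves are illustrative, not part of the test suite):

```python
import json

# Descriptor fields abridged from the bdev_get_bdevs -b BaseBdev2 output above.
bdev = json.loads("""
{
  "name": "BaseBdev2",
  "product_name": "Malloc disk",
  "block_size": 512,
  "num_blocks": 65536,
  "claimed": true,
  "claim_type": "exclusive_write"
}
""")

# bdev_malloc_create 32 512 requests a 32 MiB bdev with 512-byte blocks,
# which matches num_blocks * block_size in the descriptor.
size_mib = bdev["num_blocks"] * bdev["block_size"] // (1024 * 1024)
assert size_mib == 32

# The raid bdev claims each base bdev exclusively while configuring.
assert bdev["claimed"] and bdev["claim_type"] == "exclusive_write"
print(size_mib)  # 32
```

The same pattern repeats for BaseBdev3 and BaseBdev4 below; once the fourth base bdev is claimed, the raid transitions from `configuring` to `online`.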
00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.302 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.579 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.579 "name": "Existed_Raid", 00:17:43.579 "uuid": 
"5eea3994-e1c7-4f37-a479-5ce2afda352f", 00:17:43.579 "strip_size_kb": 64, 00:17:43.579 "state": "configuring", 00:17:43.579 "raid_level": "raid5f", 00:17:43.579 "superblock": true, 00:17:43.579 "num_base_bdevs": 4, 00:17:43.579 "num_base_bdevs_discovered": 2, 00:17:43.579 "num_base_bdevs_operational": 4, 00:17:43.579 "base_bdevs_list": [ 00:17:43.579 { 00:17:43.579 "name": "BaseBdev1", 00:17:43.579 "uuid": "842a9585-85f5-4ed7-bb2e-29bde8b0bbbc", 00:17:43.579 "is_configured": true, 00:17:43.579 "data_offset": 2048, 00:17:43.579 "data_size": 63488 00:17:43.579 }, 00:17:43.579 { 00:17:43.579 "name": "BaseBdev2", 00:17:43.579 "uuid": "12f18d9e-bd39-4dfb-91d8-420a45a26167", 00:17:43.579 "is_configured": true, 00:17:43.579 "data_offset": 2048, 00:17:43.579 "data_size": 63488 00:17:43.579 }, 00:17:43.579 { 00:17:43.579 "name": "BaseBdev3", 00:17:43.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.579 "is_configured": false, 00:17:43.579 "data_offset": 0, 00:17:43.579 "data_size": 0 00:17:43.579 }, 00:17:43.579 { 00:17:43.579 "name": "BaseBdev4", 00:17:43.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.579 "is_configured": false, 00:17:43.579 "data_offset": 0, 00:17:43.579 "data_size": 0 00:17:43.579 } 00:17:43.579 ] 00:17:43.580 }' 00:17:43.580 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.580 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.839 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:43.839 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.839 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.839 [2024-11-20 11:27:26.886427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:43.839 BaseBdev3 
00:17:43.839 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.839 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:43.839 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:43.839 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:43.839 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:43.839 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:43.839 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:43.839 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:43.839 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.839 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.839 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.839 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:43.839 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.839 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.839 [ 00:17:43.839 { 00:17:43.839 "name": "BaseBdev3", 00:17:43.839 "aliases": [ 00:17:43.839 "113b0fbe-a99c-4a62-b2ab-c05043eb2c78" 00:17:43.839 ], 00:17:43.839 "product_name": "Malloc disk", 00:17:43.839 "block_size": 512, 00:17:43.839 "num_blocks": 65536, 00:17:43.839 "uuid": "113b0fbe-a99c-4a62-b2ab-c05043eb2c78", 00:17:43.839 
"assigned_rate_limits": { 00:17:43.839 "rw_ios_per_sec": 0, 00:17:43.839 "rw_mbytes_per_sec": 0, 00:17:43.839 "r_mbytes_per_sec": 0, 00:17:43.839 "w_mbytes_per_sec": 0 00:17:43.839 }, 00:17:43.839 "claimed": true, 00:17:43.839 "claim_type": "exclusive_write", 00:17:43.839 "zoned": false, 00:17:43.839 "supported_io_types": { 00:17:43.839 "read": true, 00:17:43.839 "write": true, 00:17:43.839 "unmap": true, 00:17:43.839 "flush": true, 00:17:43.839 "reset": true, 00:17:43.839 "nvme_admin": false, 00:17:43.839 "nvme_io": false, 00:17:43.839 "nvme_io_md": false, 00:17:43.839 "write_zeroes": true, 00:17:43.839 "zcopy": true, 00:17:43.839 "get_zone_info": false, 00:17:43.839 "zone_management": false, 00:17:43.839 "zone_append": false, 00:17:43.839 "compare": false, 00:17:43.839 "compare_and_write": false, 00:17:43.839 "abort": true, 00:17:43.839 "seek_hole": false, 00:17:43.839 "seek_data": false, 00:17:43.839 "copy": true, 00:17:43.839 "nvme_iov_md": false 00:17:43.839 }, 00:17:43.839 "memory_domains": [ 00:17:43.839 { 00:17:43.839 "dma_device_id": "system", 00:17:43.839 "dma_device_type": 1 00:17:43.839 }, 00:17:43.839 { 00:17:43.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.839 "dma_device_type": 2 00:17:43.839 } 00:17:43.839 ], 00:17:43.839 "driver_specific": {} 00:17:43.839 } 00:17:43.839 ] 00:17:43.840 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.840 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:43.840 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:43.840 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:43.840 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:43.840 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:17:43.840 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.840 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:43.840 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.840 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:43.840 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.840 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.840 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.840 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.840 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.840 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.840 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.840 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.840 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.099 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.099 "name": "Existed_Raid", 00:17:44.099 "uuid": "5eea3994-e1c7-4f37-a479-5ce2afda352f", 00:17:44.099 "strip_size_kb": 64, 00:17:44.099 "state": "configuring", 00:17:44.099 "raid_level": "raid5f", 00:17:44.099 "superblock": true, 00:17:44.099 "num_base_bdevs": 4, 00:17:44.099 "num_base_bdevs_discovered": 3, 
00:17:44.099 "num_base_bdevs_operational": 4, 00:17:44.099 "base_bdevs_list": [ 00:17:44.099 { 00:17:44.099 "name": "BaseBdev1", 00:17:44.099 "uuid": "842a9585-85f5-4ed7-bb2e-29bde8b0bbbc", 00:17:44.099 "is_configured": true, 00:17:44.099 "data_offset": 2048, 00:17:44.099 "data_size": 63488 00:17:44.099 }, 00:17:44.099 { 00:17:44.099 "name": "BaseBdev2", 00:17:44.099 "uuid": "12f18d9e-bd39-4dfb-91d8-420a45a26167", 00:17:44.099 "is_configured": true, 00:17:44.099 "data_offset": 2048, 00:17:44.099 "data_size": 63488 00:17:44.099 }, 00:17:44.099 { 00:17:44.099 "name": "BaseBdev3", 00:17:44.099 "uuid": "113b0fbe-a99c-4a62-b2ab-c05043eb2c78", 00:17:44.099 "is_configured": true, 00:17:44.099 "data_offset": 2048, 00:17:44.099 "data_size": 63488 00:17:44.099 }, 00:17:44.099 { 00:17:44.099 "name": "BaseBdev4", 00:17:44.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.099 "is_configured": false, 00:17:44.099 "data_offset": 0, 00:17:44.099 "data_size": 0 00:17:44.099 } 00:17:44.099 ] 00:17:44.099 }' 00:17:44.099 11:27:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.099 11:27:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.357 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:44.357 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.357 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.357 [2024-11-20 11:27:27.435154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:44.357 [2024-11-20 11:27:27.435448] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:44.357 [2024-11-20 11:27:27.435494] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:44.357 [2024-11-20 
11:27:27.435765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:44.357 BaseBdev4 00:17:44.357 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.357 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:44.358 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:44.358 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:44.358 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:44.358 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:44.358 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:44.358 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:44.358 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.358 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.358 [2024-11-20 11:27:27.444174] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:44.358 [2024-11-20 11:27:27.444206] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:44.358 [2024-11-20 11:27:27.444503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.358 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.358 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:44.358 11:27:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.358 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.358 [ 00:17:44.358 { 00:17:44.358 "name": "BaseBdev4", 00:17:44.358 "aliases": [ 00:17:44.358 "8662fd15-482e-4333-8d0a-4101671d8947" 00:17:44.358 ], 00:17:44.358 "product_name": "Malloc disk", 00:17:44.358 "block_size": 512, 00:17:44.358 "num_blocks": 65536, 00:17:44.358 "uuid": "8662fd15-482e-4333-8d0a-4101671d8947", 00:17:44.358 "assigned_rate_limits": { 00:17:44.358 "rw_ios_per_sec": 0, 00:17:44.358 "rw_mbytes_per_sec": 0, 00:17:44.358 "r_mbytes_per_sec": 0, 00:17:44.358 "w_mbytes_per_sec": 0 00:17:44.358 }, 00:17:44.358 "claimed": true, 00:17:44.358 "claim_type": "exclusive_write", 00:17:44.358 "zoned": false, 00:17:44.358 "supported_io_types": { 00:17:44.358 "read": true, 00:17:44.358 "write": true, 00:17:44.358 "unmap": true, 00:17:44.358 "flush": true, 00:17:44.358 "reset": true, 00:17:44.358 "nvme_admin": false, 00:17:44.617 "nvme_io": false, 00:17:44.617 "nvme_io_md": false, 00:17:44.617 "write_zeroes": true, 00:17:44.617 "zcopy": true, 00:17:44.617 "get_zone_info": false, 00:17:44.617 "zone_management": false, 00:17:44.617 "zone_append": false, 00:17:44.617 "compare": false, 00:17:44.617 "compare_and_write": false, 00:17:44.617 "abort": true, 00:17:44.617 "seek_hole": false, 00:17:44.617 "seek_data": false, 00:17:44.617 "copy": true, 00:17:44.617 "nvme_iov_md": false 00:17:44.617 }, 00:17:44.617 "memory_domains": [ 00:17:44.617 { 00:17:44.617 "dma_device_id": "system", 00:17:44.617 "dma_device_type": 1 00:17:44.617 }, 00:17:44.617 { 00:17:44.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.617 "dma_device_type": 2 00:17:44.617 } 00:17:44.617 ], 00:17:44.617 "driver_specific": {} 00:17:44.617 } 00:17:44.617 ] 00:17:44.617 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.617 11:27:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:44.617 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:44.617 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:44.617 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:44.617 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.617 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.617 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.617 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.617 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:44.617 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.617 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.617 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.617 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.617 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.617 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.617 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.617 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:44.617 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.617 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.617 "name": "Existed_Raid", 00:17:44.617 "uuid": "5eea3994-e1c7-4f37-a479-5ce2afda352f", 00:17:44.617 "strip_size_kb": 64, 00:17:44.617 "state": "online", 00:17:44.617 "raid_level": "raid5f", 00:17:44.617 "superblock": true, 00:17:44.617 "num_base_bdevs": 4, 00:17:44.617 "num_base_bdevs_discovered": 4, 00:17:44.617 "num_base_bdevs_operational": 4, 00:17:44.617 "base_bdevs_list": [ 00:17:44.617 { 00:17:44.617 "name": "BaseBdev1", 00:17:44.617 "uuid": "842a9585-85f5-4ed7-bb2e-29bde8b0bbbc", 00:17:44.617 "is_configured": true, 00:17:44.617 "data_offset": 2048, 00:17:44.617 "data_size": 63488 00:17:44.617 }, 00:17:44.617 { 00:17:44.617 "name": "BaseBdev2", 00:17:44.617 "uuid": "12f18d9e-bd39-4dfb-91d8-420a45a26167", 00:17:44.617 "is_configured": true, 00:17:44.617 "data_offset": 2048, 00:17:44.617 "data_size": 63488 00:17:44.617 }, 00:17:44.617 { 00:17:44.617 "name": "BaseBdev3", 00:17:44.617 "uuid": "113b0fbe-a99c-4a62-b2ab-c05043eb2c78", 00:17:44.617 "is_configured": true, 00:17:44.617 "data_offset": 2048, 00:17:44.617 "data_size": 63488 00:17:44.617 }, 00:17:44.617 { 00:17:44.617 "name": "BaseBdev4", 00:17:44.617 "uuid": "8662fd15-482e-4333-8d0a-4101671d8947", 00:17:44.617 "is_configured": true, 00:17:44.617 "data_offset": 2048, 00:17:44.617 "data_size": 63488 00:17:44.617 } 00:17:44.617 ] 00:17:44.617 }' 00:17:44.617 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.617 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.876 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:44.876 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:17:44.876 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:44.876 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:44.876 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:44.876 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:44.876 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:44.876 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:44.876 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.876 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.876 [2024-11-20 11:27:27.920609] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:44.876 11:27:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.876 11:27:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:44.876 "name": "Existed_Raid", 00:17:44.876 "aliases": [ 00:17:44.876 "5eea3994-e1c7-4f37-a479-5ce2afda352f" 00:17:44.876 ], 00:17:44.876 "product_name": "Raid Volume", 00:17:44.876 "block_size": 512, 00:17:44.876 "num_blocks": 190464, 00:17:44.876 "uuid": "5eea3994-e1c7-4f37-a479-5ce2afda352f", 00:17:44.876 "assigned_rate_limits": { 00:17:44.876 "rw_ios_per_sec": 0, 00:17:44.876 "rw_mbytes_per_sec": 0, 00:17:44.876 "r_mbytes_per_sec": 0, 00:17:44.876 "w_mbytes_per_sec": 0 00:17:44.876 }, 00:17:44.876 "claimed": false, 00:17:44.876 "zoned": false, 00:17:44.876 "supported_io_types": { 00:17:44.876 "read": true, 00:17:44.876 "write": true, 00:17:44.876 "unmap": false, 00:17:44.876 "flush": false, 
00:17:44.876 "reset": true, 00:17:44.876 "nvme_admin": false, 00:17:44.876 "nvme_io": false, 00:17:44.876 "nvme_io_md": false, 00:17:44.876 "write_zeroes": true, 00:17:44.876 "zcopy": false, 00:17:44.876 "get_zone_info": false, 00:17:44.876 "zone_management": false, 00:17:44.876 "zone_append": false, 00:17:44.876 "compare": false, 00:17:44.876 "compare_and_write": false, 00:17:44.876 "abort": false, 00:17:44.876 "seek_hole": false, 00:17:44.876 "seek_data": false, 00:17:44.876 "copy": false, 00:17:44.876 "nvme_iov_md": false 00:17:44.876 }, 00:17:44.876 "driver_specific": { 00:17:44.876 "raid": { 00:17:44.877 "uuid": "5eea3994-e1c7-4f37-a479-5ce2afda352f", 00:17:44.877 "strip_size_kb": 64, 00:17:44.877 "state": "online", 00:17:44.877 "raid_level": "raid5f", 00:17:44.877 "superblock": true, 00:17:44.877 "num_base_bdevs": 4, 00:17:44.877 "num_base_bdevs_discovered": 4, 00:17:44.877 "num_base_bdevs_operational": 4, 00:17:44.877 "base_bdevs_list": [ 00:17:44.877 { 00:17:44.877 "name": "BaseBdev1", 00:17:44.877 "uuid": "842a9585-85f5-4ed7-bb2e-29bde8b0bbbc", 00:17:44.877 "is_configured": true, 00:17:44.877 "data_offset": 2048, 00:17:44.877 "data_size": 63488 00:17:44.877 }, 00:17:44.877 { 00:17:44.877 "name": "BaseBdev2", 00:17:44.877 "uuid": "12f18d9e-bd39-4dfb-91d8-420a45a26167", 00:17:44.877 "is_configured": true, 00:17:44.877 "data_offset": 2048, 00:17:44.877 "data_size": 63488 00:17:44.877 }, 00:17:44.877 { 00:17:44.877 "name": "BaseBdev3", 00:17:44.877 "uuid": "113b0fbe-a99c-4a62-b2ab-c05043eb2c78", 00:17:44.877 "is_configured": true, 00:17:44.877 "data_offset": 2048, 00:17:44.877 "data_size": 63488 00:17:44.877 }, 00:17:44.877 { 00:17:44.877 "name": "BaseBdev4", 00:17:44.877 "uuid": "8662fd15-482e-4333-8d0a-4101671d8947", 00:17:44.877 "is_configured": true, 00:17:44.877 "data_offset": 2048, 00:17:44.877 "data_size": 63488 00:17:44.877 } 00:17:44.877 ] 00:17:44.877 } 00:17:44.877 } 00:17:44.877 }' 00:17:44.877 11:27:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:45.136 BaseBdev2 00:17:45.136 BaseBdev3 00:17:45.136 BaseBdev4' 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.136 11:27:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:45.136 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.395 [2024-11-20 11:27:28.271870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.395 "name": "Existed_Raid", 00:17:45.395 "uuid": "5eea3994-e1c7-4f37-a479-5ce2afda352f", 00:17:45.395 "strip_size_kb": 64, 00:17:45.395 "state": "online", 00:17:45.395 "raid_level": "raid5f", 00:17:45.395 "superblock": true, 00:17:45.395 "num_base_bdevs": 4, 00:17:45.395 "num_base_bdevs_discovered": 3, 00:17:45.395 "num_base_bdevs_operational": 3, 00:17:45.395 "base_bdevs_list": [ 00:17:45.395 { 00:17:45.395 "name": null, 00:17:45.395 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:45.395 "is_configured": false, 00:17:45.395 "data_offset": 0, 00:17:45.395 "data_size": 63488 00:17:45.395 }, 00:17:45.395 { 00:17:45.395 "name": "BaseBdev2", 00:17:45.395 "uuid": "12f18d9e-bd39-4dfb-91d8-420a45a26167", 00:17:45.395 "is_configured": true, 00:17:45.395 "data_offset": 2048, 00:17:45.395 "data_size": 63488 00:17:45.395 }, 00:17:45.395 { 00:17:45.395 "name": "BaseBdev3", 00:17:45.395 "uuid": "113b0fbe-a99c-4a62-b2ab-c05043eb2c78", 00:17:45.395 "is_configured": true, 00:17:45.395 "data_offset": 2048, 00:17:45.395 "data_size": 63488 00:17:45.395 }, 00:17:45.395 { 00:17:45.395 "name": "BaseBdev4", 00:17:45.395 "uuid": "8662fd15-482e-4333-8d0a-4101671d8947", 00:17:45.395 "is_configured": true, 00:17:45.395 "data_offset": 2048, 00:17:45.395 "data_size": 63488 00:17:45.395 } 00:17:45.395 ] 00:17:45.395 }' 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.395 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.964 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:45.964 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:45.964 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.964 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.964 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:45.964 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.964 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.964 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:17:45.964 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:45.964 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:45.964 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.964 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.964 [2024-11-20 11:27:28.867919] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:45.964 [2024-11-20 11:27:28.868095] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:45.964 [2024-11-20 11:27:28.965329] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.964 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.964 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:45.964 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:45.964 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.964 11:27:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:45.964 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.964 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.964 11:27:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.964 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:45.964 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:45.964 
11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:45.964 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.964 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.964 [2024-11-20 11:27:29.021259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.297 [2024-11-20 11:27:29.173367] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:46.297 [2024-11-20 11:27:29.173424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:46.297 BaseBdev2 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.297 [ 00:17:46.297 { 00:17:46.297 "name": "BaseBdev2", 00:17:46.297 "aliases": [ 00:17:46.297 "44a0469b-076e-45f0-a0a3-68596d5e2f10" 00:17:46.297 ], 00:17:46.297 "product_name": "Malloc disk", 00:17:46.297 "block_size": 512, 00:17:46.297 "num_blocks": 65536, 00:17:46.297 "uuid": 
"44a0469b-076e-45f0-a0a3-68596d5e2f10", 00:17:46.297 "assigned_rate_limits": { 00:17:46.297 "rw_ios_per_sec": 0, 00:17:46.297 "rw_mbytes_per_sec": 0, 00:17:46.297 "r_mbytes_per_sec": 0, 00:17:46.297 "w_mbytes_per_sec": 0 00:17:46.297 }, 00:17:46.297 "claimed": false, 00:17:46.297 "zoned": false, 00:17:46.297 "supported_io_types": { 00:17:46.297 "read": true, 00:17:46.297 "write": true, 00:17:46.297 "unmap": true, 00:17:46.297 "flush": true, 00:17:46.297 "reset": true, 00:17:46.297 "nvme_admin": false, 00:17:46.297 "nvme_io": false, 00:17:46.297 "nvme_io_md": false, 00:17:46.297 "write_zeroes": true, 00:17:46.297 "zcopy": true, 00:17:46.297 "get_zone_info": false, 00:17:46.297 "zone_management": false, 00:17:46.297 "zone_append": false, 00:17:46.297 "compare": false, 00:17:46.297 "compare_and_write": false, 00:17:46.297 "abort": true, 00:17:46.297 "seek_hole": false, 00:17:46.297 "seek_data": false, 00:17:46.297 "copy": true, 00:17:46.297 "nvme_iov_md": false 00:17:46.297 }, 00:17:46.297 "memory_domains": [ 00:17:46.297 { 00:17:46.297 "dma_device_id": "system", 00:17:46.297 "dma_device_type": 1 00:17:46.297 }, 00:17:46.297 { 00:17:46.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.297 "dma_device_type": 2 00:17:46.297 } 00:17:46.297 ], 00:17:46.297 "driver_specific": {} 00:17:46.297 } 00:17:46.297 ] 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.297 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.556 BaseBdev3 00:17:46.556 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.556 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:46.556 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:46.556 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.557 [ 00:17:46.557 { 00:17:46.557 "name": "BaseBdev3", 00:17:46.557 "aliases": [ 00:17:46.557 "0e4effda-fb25-464e-a6a1-c1df0c799e43" 00:17:46.557 ], 00:17:46.557 
"product_name": "Malloc disk", 00:17:46.557 "block_size": 512, 00:17:46.557 "num_blocks": 65536, 00:17:46.557 "uuid": "0e4effda-fb25-464e-a6a1-c1df0c799e43", 00:17:46.557 "assigned_rate_limits": { 00:17:46.557 "rw_ios_per_sec": 0, 00:17:46.557 "rw_mbytes_per_sec": 0, 00:17:46.557 "r_mbytes_per_sec": 0, 00:17:46.557 "w_mbytes_per_sec": 0 00:17:46.557 }, 00:17:46.557 "claimed": false, 00:17:46.557 "zoned": false, 00:17:46.557 "supported_io_types": { 00:17:46.557 "read": true, 00:17:46.557 "write": true, 00:17:46.557 "unmap": true, 00:17:46.557 "flush": true, 00:17:46.557 "reset": true, 00:17:46.557 "nvme_admin": false, 00:17:46.557 "nvme_io": false, 00:17:46.557 "nvme_io_md": false, 00:17:46.557 "write_zeroes": true, 00:17:46.557 "zcopy": true, 00:17:46.557 "get_zone_info": false, 00:17:46.557 "zone_management": false, 00:17:46.557 "zone_append": false, 00:17:46.557 "compare": false, 00:17:46.557 "compare_and_write": false, 00:17:46.557 "abort": true, 00:17:46.557 "seek_hole": false, 00:17:46.557 "seek_data": false, 00:17:46.557 "copy": true, 00:17:46.557 "nvme_iov_md": false 00:17:46.557 }, 00:17:46.557 "memory_domains": [ 00:17:46.557 { 00:17:46.557 "dma_device_id": "system", 00:17:46.557 "dma_device_type": 1 00:17:46.557 }, 00:17:46.557 { 00:17:46.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.557 "dma_device_type": 2 00:17:46.557 } 00:17:46.557 ], 00:17:46.557 "driver_specific": {} 00:17:46.557 } 00:17:46.557 ] 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.557 BaseBdev4 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.557 [ 00:17:46.557 { 00:17:46.557 "name": "BaseBdev4", 00:17:46.557 
"aliases": [ 00:17:46.557 "f1cbe863-5b17-4494-8030-503084bd4ba5" 00:17:46.557 ], 00:17:46.557 "product_name": "Malloc disk", 00:17:46.557 "block_size": 512, 00:17:46.557 "num_blocks": 65536, 00:17:46.557 "uuid": "f1cbe863-5b17-4494-8030-503084bd4ba5", 00:17:46.557 "assigned_rate_limits": { 00:17:46.557 "rw_ios_per_sec": 0, 00:17:46.557 "rw_mbytes_per_sec": 0, 00:17:46.557 "r_mbytes_per_sec": 0, 00:17:46.557 "w_mbytes_per_sec": 0 00:17:46.557 }, 00:17:46.557 "claimed": false, 00:17:46.557 "zoned": false, 00:17:46.557 "supported_io_types": { 00:17:46.557 "read": true, 00:17:46.557 "write": true, 00:17:46.557 "unmap": true, 00:17:46.557 "flush": true, 00:17:46.557 "reset": true, 00:17:46.557 "nvme_admin": false, 00:17:46.557 "nvme_io": false, 00:17:46.557 "nvme_io_md": false, 00:17:46.557 "write_zeroes": true, 00:17:46.557 "zcopy": true, 00:17:46.557 "get_zone_info": false, 00:17:46.557 "zone_management": false, 00:17:46.557 "zone_append": false, 00:17:46.557 "compare": false, 00:17:46.557 "compare_and_write": false, 00:17:46.557 "abort": true, 00:17:46.557 "seek_hole": false, 00:17:46.557 "seek_data": false, 00:17:46.557 "copy": true, 00:17:46.557 "nvme_iov_md": false 00:17:46.557 }, 00:17:46.557 "memory_domains": [ 00:17:46.557 { 00:17:46.557 "dma_device_id": "system", 00:17:46.557 "dma_device_type": 1 00:17:46.557 }, 00:17:46.557 { 00:17:46.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.557 "dma_device_type": 2 00:17:46.557 } 00:17:46.557 ], 00:17:46.557 "driver_specific": {} 00:17:46.557 } 00:17:46.557 ] 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:46.557 
11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.557 [2024-11-20 11:27:29.537015] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:46.557 [2024-11-20 11:27:29.537062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:46.557 [2024-11-20 11:27:29.537089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:46.557 [2024-11-20 11:27:29.539002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:46.557 [2024-11-20 11:27:29.539058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.557 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.557 "name": "Existed_Raid", 00:17:46.557 "uuid": "d5fb94f1-583a-42aa-b7e1-d7b4ce130d80", 00:17:46.557 "strip_size_kb": 64, 00:17:46.557 "state": "configuring", 00:17:46.557 "raid_level": "raid5f", 00:17:46.557 "superblock": true, 00:17:46.557 "num_base_bdevs": 4, 00:17:46.557 "num_base_bdevs_discovered": 3, 00:17:46.557 "num_base_bdevs_operational": 4, 00:17:46.557 "base_bdevs_list": [ 00:17:46.557 { 00:17:46.557 "name": "BaseBdev1", 00:17:46.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.558 "is_configured": false, 00:17:46.558 "data_offset": 0, 00:17:46.558 "data_size": 0 00:17:46.558 }, 00:17:46.558 { 00:17:46.558 "name": "BaseBdev2", 00:17:46.558 "uuid": "44a0469b-076e-45f0-a0a3-68596d5e2f10", 00:17:46.558 "is_configured": true, 00:17:46.558 "data_offset": 2048, 00:17:46.558 "data_size": 63488 00:17:46.558 }, 00:17:46.558 { 00:17:46.558 "name": "BaseBdev3", 
00:17:46.558 "uuid": "0e4effda-fb25-464e-a6a1-c1df0c799e43", 00:17:46.558 "is_configured": true, 00:17:46.558 "data_offset": 2048, 00:17:46.558 "data_size": 63488 00:17:46.558 }, 00:17:46.558 { 00:17:46.558 "name": "BaseBdev4", 00:17:46.558 "uuid": "f1cbe863-5b17-4494-8030-503084bd4ba5", 00:17:46.558 "is_configured": true, 00:17:46.558 "data_offset": 2048, 00:17:46.558 "data_size": 63488 00:17:46.558 } 00:17:46.558 ] 00:17:46.558 }' 00:17:46.558 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.558 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.126 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:47.126 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.126 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.126 [2024-11-20 11:27:29.980280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:47.126 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.126 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:47.126 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.126 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:47.126 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.126 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.126 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:47.126 
11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.126 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.126 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.126 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.126 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.126 11:27:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.126 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.126 11:27:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.126 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.126 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.126 "name": "Existed_Raid", 00:17:47.126 "uuid": "d5fb94f1-583a-42aa-b7e1-d7b4ce130d80", 00:17:47.126 "strip_size_kb": 64, 00:17:47.126 "state": "configuring", 00:17:47.126 "raid_level": "raid5f", 00:17:47.126 "superblock": true, 00:17:47.126 "num_base_bdevs": 4, 00:17:47.126 "num_base_bdevs_discovered": 2, 00:17:47.126 "num_base_bdevs_operational": 4, 00:17:47.126 "base_bdevs_list": [ 00:17:47.126 { 00:17:47.126 "name": "BaseBdev1", 00:17:47.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.126 "is_configured": false, 00:17:47.126 "data_offset": 0, 00:17:47.126 "data_size": 0 00:17:47.126 }, 00:17:47.126 { 00:17:47.126 "name": null, 00:17:47.126 "uuid": "44a0469b-076e-45f0-a0a3-68596d5e2f10", 00:17:47.126 "is_configured": false, 00:17:47.126 "data_offset": 0, 00:17:47.126 "data_size": 63488 00:17:47.126 }, 00:17:47.126 { 
00:17:47.126 "name": "BaseBdev3", 00:17:47.126 "uuid": "0e4effda-fb25-464e-a6a1-c1df0c799e43", 00:17:47.126 "is_configured": true, 00:17:47.126 "data_offset": 2048, 00:17:47.126 "data_size": 63488 00:17:47.126 }, 00:17:47.126 { 00:17:47.126 "name": "BaseBdev4", 00:17:47.126 "uuid": "f1cbe863-5b17-4494-8030-503084bd4ba5", 00:17:47.126 "is_configured": true, 00:17:47.126 "data_offset": 2048, 00:17:47.126 "data_size": 63488 00:17:47.126 } 00:17:47.126 ] 00:17:47.126 }' 00:17:47.126 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.126 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.386 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:47.386 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.386 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.386 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.386 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.646 [2024-11-20 11:27:30.550339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:47.646 BaseBdev1 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.646 [ 00:17:47.646 { 00:17:47.646 "name": "BaseBdev1", 00:17:47.646 "aliases": [ 00:17:47.646 "a69106be-db40-4cd6-a50f-4cdb56f80159" 00:17:47.646 ], 00:17:47.646 "product_name": "Malloc disk", 00:17:47.646 "block_size": 512, 00:17:47.646 "num_blocks": 65536, 00:17:47.646 "uuid": "a69106be-db40-4cd6-a50f-4cdb56f80159", 00:17:47.646 "assigned_rate_limits": { 00:17:47.646 "rw_ios_per_sec": 0, 00:17:47.646 "rw_mbytes_per_sec": 0, 00:17:47.646 
"r_mbytes_per_sec": 0, 00:17:47.646 "w_mbytes_per_sec": 0 00:17:47.646 }, 00:17:47.646 "claimed": true, 00:17:47.646 "claim_type": "exclusive_write", 00:17:47.646 "zoned": false, 00:17:47.646 "supported_io_types": { 00:17:47.646 "read": true, 00:17:47.646 "write": true, 00:17:47.646 "unmap": true, 00:17:47.646 "flush": true, 00:17:47.646 "reset": true, 00:17:47.646 "nvme_admin": false, 00:17:47.646 "nvme_io": false, 00:17:47.646 "nvme_io_md": false, 00:17:47.646 "write_zeroes": true, 00:17:47.646 "zcopy": true, 00:17:47.646 "get_zone_info": false, 00:17:47.646 "zone_management": false, 00:17:47.646 "zone_append": false, 00:17:47.646 "compare": false, 00:17:47.646 "compare_and_write": false, 00:17:47.646 "abort": true, 00:17:47.646 "seek_hole": false, 00:17:47.646 "seek_data": false, 00:17:47.646 "copy": true, 00:17:47.646 "nvme_iov_md": false 00:17:47.646 }, 00:17:47.646 "memory_domains": [ 00:17:47.646 { 00:17:47.646 "dma_device_id": "system", 00:17:47.646 "dma_device_type": 1 00:17:47.646 }, 00:17:47.646 { 00:17:47.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.646 "dma_device_type": 2 00:17:47.646 } 00:17:47.646 ], 00:17:47.646 "driver_specific": {} 00:17:47.646 } 00:17:47.646 ] 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.646 11:27:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.646 "name": "Existed_Raid", 00:17:47.646 "uuid": "d5fb94f1-583a-42aa-b7e1-d7b4ce130d80", 00:17:47.646 "strip_size_kb": 64, 00:17:47.646 "state": "configuring", 00:17:47.646 "raid_level": "raid5f", 00:17:47.646 "superblock": true, 00:17:47.646 "num_base_bdevs": 4, 00:17:47.646 "num_base_bdevs_discovered": 3, 00:17:47.646 "num_base_bdevs_operational": 4, 00:17:47.646 "base_bdevs_list": [ 00:17:47.646 { 00:17:47.646 "name": "BaseBdev1", 00:17:47.646 "uuid": "a69106be-db40-4cd6-a50f-4cdb56f80159", 00:17:47.646 "is_configured": true, 00:17:47.646 "data_offset": 2048, 00:17:47.646 "data_size": 63488 00:17:47.646 
}, 00:17:47.646 { 00:17:47.646 "name": null, 00:17:47.646 "uuid": "44a0469b-076e-45f0-a0a3-68596d5e2f10", 00:17:47.646 "is_configured": false, 00:17:47.646 "data_offset": 0, 00:17:47.646 "data_size": 63488 00:17:47.646 }, 00:17:47.646 { 00:17:47.646 "name": "BaseBdev3", 00:17:47.646 "uuid": "0e4effda-fb25-464e-a6a1-c1df0c799e43", 00:17:47.646 "is_configured": true, 00:17:47.646 "data_offset": 2048, 00:17:47.646 "data_size": 63488 00:17:47.646 }, 00:17:47.646 { 00:17:47.646 "name": "BaseBdev4", 00:17:47.646 "uuid": "f1cbe863-5b17-4494-8030-503084bd4ba5", 00:17:47.646 "is_configured": true, 00:17:47.646 "data_offset": 2048, 00:17:47.646 "data_size": 63488 00:17:47.646 } 00:17:47.646 ] 00:17:47.646 }' 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.646 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.905 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.906 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.906 11:27:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.906 11:27:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:47.906 11:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.164 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:48.164 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:48.164 11:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.164 11:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.164 
[2024-11-20 11:27:31.045595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:48.164 11:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.164 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:48.164 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.165 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.165 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.165 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.165 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:48.165 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.165 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.165 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.165 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.165 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.165 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.165 11:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.165 11:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.165 11:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:48.165 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.165 "name": "Existed_Raid", 00:17:48.165 "uuid": "d5fb94f1-583a-42aa-b7e1-d7b4ce130d80", 00:17:48.165 "strip_size_kb": 64, 00:17:48.165 "state": "configuring", 00:17:48.165 "raid_level": "raid5f", 00:17:48.165 "superblock": true, 00:17:48.165 "num_base_bdevs": 4, 00:17:48.165 "num_base_bdevs_discovered": 2, 00:17:48.165 "num_base_bdevs_operational": 4, 00:17:48.165 "base_bdevs_list": [ 00:17:48.165 { 00:17:48.165 "name": "BaseBdev1", 00:17:48.165 "uuid": "a69106be-db40-4cd6-a50f-4cdb56f80159", 00:17:48.165 "is_configured": true, 00:17:48.165 "data_offset": 2048, 00:17:48.165 "data_size": 63488 00:17:48.165 }, 00:17:48.165 { 00:17:48.165 "name": null, 00:17:48.165 "uuid": "44a0469b-076e-45f0-a0a3-68596d5e2f10", 00:17:48.165 "is_configured": false, 00:17:48.165 "data_offset": 0, 00:17:48.165 "data_size": 63488 00:17:48.165 }, 00:17:48.165 { 00:17:48.165 "name": null, 00:17:48.165 "uuid": "0e4effda-fb25-464e-a6a1-c1df0c799e43", 00:17:48.165 "is_configured": false, 00:17:48.165 "data_offset": 0, 00:17:48.165 "data_size": 63488 00:17:48.165 }, 00:17:48.165 { 00:17:48.165 "name": "BaseBdev4", 00:17:48.165 "uuid": "f1cbe863-5b17-4494-8030-503084bd4ba5", 00:17:48.165 "is_configured": true, 00:17:48.165 "data_offset": 2048, 00:17:48.165 "data_size": 63488 00:17:48.165 } 00:17:48.165 ] 00:17:48.165 }' 00:17:48.165 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.165 11:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.424 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:48.424 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.424 11:27:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.424 11:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.683 [2024-11-20 11:27:31.572744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.683 11:27:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.683 "name": "Existed_Raid", 00:17:48.683 "uuid": "d5fb94f1-583a-42aa-b7e1-d7b4ce130d80", 00:17:48.683 "strip_size_kb": 64, 00:17:48.683 "state": "configuring", 00:17:48.683 "raid_level": "raid5f", 00:17:48.683 "superblock": true, 00:17:48.683 "num_base_bdevs": 4, 00:17:48.683 "num_base_bdevs_discovered": 3, 00:17:48.683 "num_base_bdevs_operational": 4, 00:17:48.683 "base_bdevs_list": [ 00:17:48.683 { 00:17:48.683 "name": "BaseBdev1", 00:17:48.683 "uuid": "a69106be-db40-4cd6-a50f-4cdb56f80159", 00:17:48.683 "is_configured": true, 00:17:48.683 "data_offset": 2048, 00:17:48.683 "data_size": 63488 00:17:48.683 }, 00:17:48.683 { 00:17:48.683 "name": null, 00:17:48.683 "uuid": "44a0469b-076e-45f0-a0a3-68596d5e2f10", 00:17:48.683 "is_configured": false, 00:17:48.683 "data_offset": 0, 00:17:48.683 "data_size": 63488 00:17:48.683 }, 00:17:48.683 { 00:17:48.683 "name": "BaseBdev3", 00:17:48.683 "uuid": "0e4effda-fb25-464e-a6a1-c1df0c799e43", 00:17:48.683 "is_configured": true, 00:17:48.683 "data_offset": 2048, 00:17:48.683 "data_size": 63488 00:17:48.683 }, 00:17:48.683 { 
00:17:48.683 "name": "BaseBdev4", 00:17:48.683 "uuid": "f1cbe863-5b17-4494-8030-503084bd4ba5", 00:17:48.683 "is_configured": true, 00:17:48.683 "data_offset": 2048, 00:17:48.683 "data_size": 63488 00:17:48.683 } 00:17:48.683 ] 00:17:48.683 }' 00:17:48.683 11:27:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.684 11:27:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.962 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.962 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:48.962 11:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.962 11:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.962 11:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.221 [2024-11-20 11:27:32.103945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.221 "name": "Existed_Raid", 00:17:49.221 "uuid": "d5fb94f1-583a-42aa-b7e1-d7b4ce130d80", 00:17:49.221 "strip_size_kb": 64, 00:17:49.221 "state": "configuring", 00:17:49.221 "raid_level": "raid5f", 00:17:49.221 "superblock": true, 00:17:49.221 "num_base_bdevs": 4, 00:17:49.221 "num_base_bdevs_discovered": 2, 00:17:49.221 
"num_base_bdevs_operational": 4, 00:17:49.221 "base_bdevs_list": [ 00:17:49.221 { 00:17:49.221 "name": null, 00:17:49.221 "uuid": "a69106be-db40-4cd6-a50f-4cdb56f80159", 00:17:49.221 "is_configured": false, 00:17:49.221 "data_offset": 0, 00:17:49.221 "data_size": 63488 00:17:49.221 }, 00:17:49.221 { 00:17:49.221 "name": null, 00:17:49.221 "uuid": "44a0469b-076e-45f0-a0a3-68596d5e2f10", 00:17:49.221 "is_configured": false, 00:17:49.221 "data_offset": 0, 00:17:49.221 "data_size": 63488 00:17:49.221 }, 00:17:49.221 { 00:17:49.221 "name": "BaseBdev3", 00:17:49.221 "uuid": "0e4effda-fb25-464e-a6a1-c1df0c799e43", 00:17:49.221 "is_configured": true, 00:17:49.221 "data_offset": 2048, 00:17:49.221 "data_size": 63488 00:17:49.221 }, 00:17:49.221 { 00:17:49.221 "name": "BaseBdev4", 00:17:49.221 "uuid": "f1cbe863-5b17-4494-8030-503084bd4ba5", 00:17:49.221 "is_configured": true, 00:17:49.221 "data_offset": 2048, 00:17:49.221 "data_size": 63488 00:17:49.221 } 00:17:49.221 ] 00:17:49.221 }' 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.221 11:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.788 [2024-11-20 11:27:32.702239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.788 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.788 "name": "Existed_Raid", 00:17:49.788 "uuid": "d5fb94f1-583a-42aa-b7e1-d7b4ce130d80", 00:17:49.788 "strip_size_kb": 64, 00:17:49.788 "state": "configuring", 00:17:49.788 "raid_level": "raid5f", 00:17:49.788 "superblock": true, 00:17:49.788 "num_base_bdevs": 4, 00:17:49.788 "num_base_bdevs_discovered": 3, 00:17:49.788 "num_base_bdevs_operational": 4, 00:17:49.788 "base_bdevs_list": [ 00:17:49.788 { 00:17:49.788 "name": null, 00:17:49.788 "uuid": "a69106be-db40-4cd6-a50f-4cdb56f80159", 00:17:49.788 "is_configured": false, 00:17:49.788 "data_offset": 0, 00:17:49.788 "data_size": 63488 00:17:49.788 }, 00:17:49.788 { 00:17:49.788 "name": "BaseBdev2", 00:17:49.788 "uuid": "44a0469b-076e-45f0-a0a3-68596d5e2f10", 00:17:49.788 "is_configured": true, 00:17:49.788 "data_offset": 2048, 00:17:49.788 "data_size": 63488 00:17:49.788 }, 00:17:49.788 { 00:17:49.788 "name": "BaseBdev3", 00:17:49.788 "uuid": "0e4effda-fb25-464e-a6a1-c1df0c799e43", 00:17:49.788 "is_configured": true, 00:17:49.788 "data_offset": 2048, 00:17:49.788 "data_size": 63488 00:17:49.788 }, 00:17:49.788 { 00:17:49.788 "name": "BaseBdev4", 00:17:49.788 "uuid": "f1cbe863-5b17-4494-8030-503084bd4ba5", 00:17:49.788 "is_configured": true, 00:17:49.788 "data_offset": 2048, 00:17:49.788 "data_size": 63488 00:17:49.788 } 00:17:49.788 ] 00:17:49.789 }' 00:17:49.789 11:27:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.789 11:27:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:17:50.045 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.045 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.045 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.045 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:50.045 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a69106be-db40-4cd6-a50f-4cdb56f80159 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.302 [2024-11-20 11:27:33.258260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:50.302 [2024-11-20 11:27:33.258533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:50.302 [2024-11-20 
11:27:33.258546] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:50.302 [2024-11-20 11:27:33.258788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:50.302 NewBaseBdev 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.302 [2024-11-20 11:27:33.266130] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:50.302 [2024-11-20 11:27:33.266149] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:50.302 [2024-11-20 11:27:33.266309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.302 [ 00:17:50.302 { 00:17:50.302 "name": "NewBaseBdev", 00:17:50.302 "aliases": [ 00:17:50.302 "a69106be-db40-4cd6-a50f-4cdb56f80159" 00:17:50.302 ], 00:17:50.302 "product_name": "Malloc disk", 00:17:50.302 "block_size": 512, 00:17:50.302 "num_blocks": 65536, 00:17:50.302 "uuid": "a69106be-db40-4cd6-a50f-4cdb56f80159", 00:17:50.302 "assigned_rate_limits": { 00:17:50.302 "rw_ios_per_sec": 0, 00:17:50.302 "rw_mbytes_per_sec": 0, 00:17:50.302 "r_mbytes_per_sec": 0, 00:17:50.302 "w_mbytes_per_sec": 0 00:17:50.302 }, 00:17:50.302 "claimed": true, 00:17:50.302 "claim_type": "exclusive_write", 00:17:50.302 "zoned": false, 00:17:50.302 "supported_io_types": { 00:17:50.302 "read": true, 00:17:50.302 "write": true, 00:17:50.302 "unmap": true, 00:17:50.302 "flush": true, 00:17:50.302 "reset": true, 00:17:50.302 "nvme_admin": false, 00:17:50.302 "nvme_io": false, 00:17:50.302 "nvme_io_md": false, 00:17:50.302 "write_zeroes": true, 00:17:50.302 "zcopy": true, 00:17:50.302 "get_zone_info": false, 00:17:50.302 "zone_management": false, 00:17:50.302 "zone_append": false, 00:17:50.302 "compare": false, 00:17:50.302 "compare_and_write": false, 00:17:50.302 "abort": true, 00:17:50.302 "seek_hole": false, 00:17:50.302 "seek_data": false, 00:17:50.302 "copy": true, 00:17:50.302 "nvme_iov_md": false 00:17:50.302 }, 00:17:50.302 "memory_domains": [ 00:17:50.302 { 00:17:50.302 "dma_device_id": "system", 00:17:50.302 "dma_device_type": 1 00:17:50.302 }, 00:17:50.302 { 00:17:50.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.302 "dma_device_type": 2 00:17:50.302 } 00:17:50.302 ], 00:17:50.302 "driver_specific": {} 00:17:50.302 } 00:17:50.302 ] 00:17:50.302 11:27:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.302 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.303 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:50.303 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.303 "name": "Existed_Raid", 00:17:50.303 "uuid": "d5fb94f1-583a-42aa-b7e1-d7b4ce130d80", 00:17:50.303 "strip_size_kb": 64, 00:17:50.303 "state": "online", 00:17:50.303 "raid_level": "raid5f", 00:17:50.303 "superblock": true, 00:17:50.303 "num_base_bdevs": 4, 00:17:50.303 "num_base_bdevs_discovered": 4, 00:17:50.303 "num_base_bdevs_operational": 4, 00:17:50.303 "base_bdevs_list": [ 00:17:50.303 { 00:17:50.303 "name": "NewBaseBdev", 00:17:50.303 "uuid": "a69106be-db40-4cd6-a50f-4cdb56f80159", 00:17:50.303 "is_configured": true, 00:17:50.303 "data_offset": 2048, 00:17:50.303 "data_size": 63488 00:17:50.303 }, 00:17:50.303 { 00:17:50.303 "name": "BaseBdev2", 00:17:50.303 "uuid": "44a0469b-076e-45f0-a0a3-68596d5e2f10", 00:17:50.303 "is_configured": true, 00:17:50.303 "data_offset": 2048, 00:17:50.303 "data_size": 63488 00:17:50.303 }, 00:17:50.303 { 00:17:50.303 "name": "BaseBdev3", 00:17:50.303 "uuid": "0e4effda-fb25-464e-a6a1-c1df0c799e43", 00:17:50.303 "is_configured": true, 00:17:50.303 "data_offset": 2048, 00:17:50.303 "data_size": 63488 00:17:50.303 }, 00:17:50.303 { 00:17:50.303 "name": "BaseBdev4", 00:17:50.303 "uuid": "f1cbe863-5b17-4494-8030-503084bd4ba5", 00:17:50.303 "is_configured": true, 00:17:50.303 "data_offset": 2048, 00:17:50.303 "data_size": 63488 00:17:50.303 } 00:17:50.303 ] 00:17:50.303 }' 00:17:50.303 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.303 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:50.867 [2024-11-20 11:27:33.754032] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:50.867 "name": "Existed_Raid", 00:17:50.867 "aliases": [ 00:17:50.867 "d5fb94f1-583a-42aa-b7e1-d7b4ce130d80" 00:17:50.867 ], 00:17:50.867 "product_name": "Raid Volume", 00:17:50.867 "block_size": 512, 00:17:50.867 "num_blocks": 190464, 00:17:50.867 "uuid": "d5fb94f1-583a-42aa-b7e1-d7b4ce130d80", 00:17:50.867 "assigned_rate_limits": { 00:17:50.867 "rw_ios_per_sec": 0, 00:17:50.867 "rw_mbytes_per_sec": 0, 00:17:50.867 "r_mbytes_per_sec": 0, 00:17:50.867 "w_mbytes_per_sec": 0 00:17:50.867 }, 00:17:50.867 "claimed": false, 00:17:50.867 "zoned": false, 00:17:50.867 "supported_io_types": { 00:17:50.867 "read": true, 00:17:50.867 "write": true, 00:17:50.867 "unmap": false, 00:17:50.867 "flush": false, 00:17:50.867 "reset": true, 00:17:50.867 "nvme_admin": false, 00:17:50.867 "nvme_io": false, 
00:17:50.867 "nvme_io_md": false, 00:17:50.867 "write_zeroes": true, 00:17:50.867 "zcopy": false, 00:17:50.867 "get_zone_info": false, 00:17:50.867 "zone_management": false, 00:17:50.867 "zone_append": false, 00:17:50.867 "compare": false, 00:17:50.867 "compare_and_write": false, 00:17:50.867 "abort": false, 00:17:50.867 "seek_hole": false, 00:17:50.867 "seek_data": false, 00:17:50.867 "copy": false, 00:17:50.867 "nvme_iov_md": false 00:17:50.867 }, 00:17:50.867 "driver_specific": { 00:17:50.867 "raid": { 00:17:50.867 "uuid": "d5fb94f1-583a-42aa-b7e1-d7b4ce130d80", 00:17:50.867 "strip_size_kb": 64, 00:17:50.867 "state": "online", 00:17:50.867 "raid_level": "raid5f", 00:17:50.867 "superblock": true, 00:17:50.867 "num_base_bdevs": 4, 00:17:50.867 "num_base_bdevs_discovered": 4, 00:17:50.867 "num_base_bdevs_operational": 4, 00:17:50.867 "base_bdevs_list": [ 00:17:50.867 { 00:17:50.867 "name": "NewBaseBdev", 00:17:50.867 "uuid": "a69106be-db40-4cd6-a50f-4cdb56f80159", 00:17:50.867 "is_configured": true, 00:17:50.867 "data_offset": 2048, 00:17:50.867 "data_size": 63488 00:17:50.867 }, 00:17:50.867 { 00:17:50.867 "name": "BaseBdev2", 00:17:50.867 "uuid": "44a0469b-076e-45f0-a0a3-68596d5e2f10", 00:17:50.867 "is_configured": true, 00:17:50.867 "data_offset": 2048, 00:17:50.867 "data_size": 63488 00:17:50.867 }, 00:17:50.867 { 00:17:50.867 "name": "BaseBdev3", 00:17:50.867 "uuid": "0e4effda-fb25-464e-a6a1-c1df0c799e43", 00:17:50.867 "is_configured": true, 00:17:50.867 "data_offset": 2048, 00:17:50.867 "data_size": 63488 00:17:50.867 }, 00:17:50.867 { 00:17:50.867 "name": "BaseBdev4", 00:17:50.867 "uuid": "f1cbe863-5b17-4494-8030-503084bd4ba5", 00:17:50.867 "is_configured": true, 00:17:50.867 "data_offset": 2048, 00:17:50.867 "data_size": 63488 00:17:50.867 } 00:17:50.867 ] 00:17:50.867 } 00:17:50.867 } 00:17:50.867 }' 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:50.867 BaseBdev2 00:17:50.867 BaseBdev3 00:17:50.867 BaseBdev4' 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.867 11:27:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.867 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.126 11:27:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.126 [2024-11-20 11:27:34.061190] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:51.126 [2024-11-20 11:27:34.061247] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:51.126 [2024-11-20 11:27:34.061380] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.126 [2024-11-20 11:27:34.061786] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:51.126 [2024-11-20 11:27:34.061808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83658 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83658 ']' 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83658 00:17:51.126 11:27:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83658 00:17:51.126 killing process with pid 83658 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83658' 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83658 00:17:51.126 [2024-11-20 11:27:34.100041] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:51.126 11:27:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83658 00:17:51.694 [2024-11-20 11:27:34.573940] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:52.629 11:27:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:52.629 00:17:52.629 real 0m11.845s 00:17:52.629 user 0m18.836s 00:17:52.629 sys 0m2.041s 00:17:52.629 11:27:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.629 11:27:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.629 ************************************ 00:17:52.629 END TEST raid5f_state_function_test_sb 00:17:52.629 ************************************ 00:17:52.886 11:27:35 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:17:52.886 11:27:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:52.886 
11:27:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.886 11:27:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:52.886 ************************************ 00:17:52.886 START TEST raid5f_superblock_test 00:17:52.886 ************************************ 00:17:52.886 11:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:17:52.886 11:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:52.886 11:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:52.886 11:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:52.886 11:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:52.886 11:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:52.886 11:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:52.886 11:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:52.886 11:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:52.886 11:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:52.887 11:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:52.887 11:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:52.887 11:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:52.887 11:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:52.887 11:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:52.887 11:27:35 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:52.887 11:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:52.887 11:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84326 00:17:52.887 11:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:52.887 11:27:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84326 00:17:52.887 11:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84326 ']' 00:17:52.887 11:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.887 11:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.887 11:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.887 11:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.887 11:27:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.887 [2024-11-20 11:27:35.900906] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:17:52.887 [2024-11-20 11:27:35.901047] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84326 ] 00:17:53.144 [2024-11-20 11:27:36.076045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.144 [2024-11-20 11:27:36.204184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.402 [2024-11-20 11:27:36.444892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.402 [2024-11-20 11:27:36.444934] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.968 malloc1 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.968 [2024-11-20 11:27:36.858515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:53.968 [2024-11-20 11:27:36.858588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.968 [2024-11-20 11:27:36.858615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:53.968 [2024-11-20 11:27:36.858625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.968 [2024-11-20 11:27:36.861012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.968 [2024-11-20 11:27:36.861051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:53.968 pt1 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.968 malloc2 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.968 [2024-11-20 11:27:36.918032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:53.968 [2024-11-20 11:27:36.918097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.968 [2024-11-20 11:27:36.918121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:53.968 [2024-11-20 11:27:36.918131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.968 [2024-11-20 11:27:36.920541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.968 [2024-11-20 11:27:36.920582] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:53.968 pt2 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.968 malloc3 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.968 11:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.969 [2024-11-20 11:27:36.989448] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:53.969 [2024-11-20 11:27:36.989539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.969 [2024-11-20 11:27:36.989566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:53.969 [2024-11-20 11:27:36.989580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.969 [2024-11-20 11:27:36.992007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.969 [2024-11-20 11:27:36.992051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:53.969 pt3 00:17:53.969 11:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.969 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:53.969 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:53.969 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:53.969 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:53.969 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:53.969 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:53.969 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:53.969 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:53.969 11:27:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:53.969 11:27:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.969 11:27:36 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.969 malloc4 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.969 [2024-11-20 11:27:37.047718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:53.969 [2024-11-20 11:27:37.047779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.969 [2024-11-20 11:27:37.047798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:53.969 [2024-11-20 11:27:37.047808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.969 [2024-11-20 11:27:37.050079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.969 [2024-11-20 11:27:37.050119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:53.969 pt4 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:53.969 [2024-11-20 11:27:37.059743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:53.969 [2024-11-20 11:27:37.061732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:53.969 [2024-11-20 11:27:37.061804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:53.969 [2024-11-20 11:27:37.061892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:53.969 [2024-11-20 11:27:37.062106] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:53.969 [2024-11-20 11:27:37.062130] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:53.969 [2024-11-20 11:27:37.062402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:53.969 [2024-11-20 11:27:37.071178] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:53.969 [2024-11-20 11:27:37.071208] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:53.969 [2024-11-20 11:27:37.071428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.969 
11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.969 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.226 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.226 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.226 "name": "raid_bdev1", 00:17:54.226 "uuid": "7f1a6398-ada9-4839-8a62-2d1572a4e08d", 00:17:54.226 "strip_size_kb": 64, 00:17:54.226 "state": "online", 00:17:54.226 "raid_level": "raid5f", 00:17:54.226 "superblock": true, 00:17:54.226 "num_base_bdevs": 4, 00:17:54.226 "num_base_bdevs_discovered": 4, 00:17:54.226 "num_base_bdevs_operational": 4, 00:17:54.226 "base_bdevs_list": [ 00:17:54.226 { 00:17:54.226 "name": "pt1", 00:17:54.226 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:54.226 "is_configured": true, 00:17:54.226 "data_offset": 2048, 00:17:54.226 "data_size": 63488 00:17:54.226 }, 00:17:54.226 { 00:17:54.226 "name": "pt2", 00:17:54.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:54.226 "is_configured": true, 00:17:54.226 "data_offset": 2048, 00:17:54.226 
"data_size": 63488 00:17:54.226 }, 00:17:54.226 { 00:17:54.226 "name": "pt3", 00:17:54.226 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:54.226 "is_configured": true, 00:17:54.226 "data_offset": 2048, 00:17:54.226 "data_size": 63488 00:17:54.226 }, 00:17:54.226 { 00:17:54.226 "name": "pt4", 00:17:54.226 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:54.226 "is_configured": true, 00:17:54.226 "data_offset": 2048, 00:17:54.226 "data_size": 63488 00:17:54.226 } 00:17:54.226 ] 00:17:54.226 }' 00:17:54.226 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.226 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.484 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:54.484 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:54.484 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:54.484 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:54.484 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:54.484 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:54.484 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:54.484 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.484 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.484 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:54.484 [2024-11-20 11:27:37.536888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.484 11:27:37 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.484 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:54.484 "name": "raid_bdev1", 00:17:54.484 "aliases": [ 00:17:54.484 "7f1a6398-ada9-4839-8a62-2d1572a4e08d" 00:17:54.484 ], 00:17:54.484 "product_name": "Raid Volume", 00:17:54.484 "block_size": 512, 00:17:54.484 "num_blocks": 190464, 00:17:54.484 "uuid": "7f1a6398-ada9-4839-8a62-2d1572a4e08d", 00:17:54.484 "assigned_rate_limits": { 00:17:54.484 "rw_ios_per_sec": 0, 00:17:54.484 "rw_mbytes_per_sec": 0, 00:17:54.484 "r_mbytes_per_sec": 0, 00:17:54.484 "w_mbytes_per_sec": 0 00:17:54.484 }, 00:17:54.484 "claimed": false, 00:17:54.484 "zoned": false, 00:17:54.484 "supported_io_types": { 00:17:54.484 "read": true, 00:17:54.484 "write": true, 00:17:54.484 "unmap": false, 00:17:54.484 "flush": false, 00:17:54.484 "reset": true, 00:17:54.484 "nvme_admin": false, 00:17:54.484 "nvme_io": false, 00:17:54.484 "nvme_io_md": false, 00:17:54.484 "write_zeroes": true, 00:17:54.484 "zcopy": false, 00:17:54.484 "get_zone_info": false, 00:17:54.484 "zone_management": false, 00:17:54.484 "zone_append": false, 00:17:54.484 "compare": false, 00:17:54.484 "compare_and_write": false, 00:17:54.484 "abort": false, 00:17:54.484 "seek_hole": false, 00:17:54.484 "seek_data": false, 00:17:54.484 "copy": false, 00:17:54.484 "nvme_iov_md": false 00:17:54.484 }, 00:17:54.484 "driver_specific": { 00:17:54.484 "raid": { 00:17:54.484 "uuid": "7f1a6398-ada9-4839-8a62-2d1572a4e08d", 00:17:54.484 "strip_size_kb": 64, 00:17:54.484 "state": "online", 00:17:54.484 "raid_level": "raid5f", 00:17:54.484 "superblock": true, 00:17:54.484 "num_base_bdevs": 4, 00:17:54.484 "num_base_bdevs_discovered": 4, 00:17:54.484 "num_base_bdevs_operational": 4, 00:17:54.484 "base_bdevs_list": [ 00:17:54.484 { 00:17:54.484 "name": "pt1", 00:17:54.484 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:54.484 "is_configured": true, 00:17:54.484 "data_offset": 2048, 
00:17:54.484 "data_size": 63488 00:17:54.484 }, 00:17:54.484 { 00:17:54.484 "name": "pt2", 00:17:54.484 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:54.484 "is_configured": true, 00:17:54.484 "data_offset": 2048, 00:17:54.484 "data_size": 63488 00:17:54.484 }, 00:17:54.484 { 00:17:54.484 "name": "pt3", 00:17:54.484 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:54.484 "is_configured": true, 00:17:54.484 "data_offset": 2048, 00:17:54.484 "data_size": 63488 00:17:54.484 }, 00:17:54.484 { 00:17:54.484 "name": "pt4", 00:17:54.484 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:54.484 "is_configured": true, 00:17:54.484 "data_offset": 2048, 00:17:54.484 "data_size": 63488 00:17:54.484 } 00:17:54.484 ] 00:17:54.484 } 00:17:54.484 } 00:17:54.484 }' 00:17:54.484 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:54.742 pt2 00:17:54.742 pt3 00:17:54.742 pt4' 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.742 11:27:37 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.742 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.743 11:27:37 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:54.743 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:54.743 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.743 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.743 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:54.743 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.743 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.743 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.743 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:54.743 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:55.002 [2024-11-20 11:27:37.864292] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7f1a6398-ada9-4839-8a62-2d1572a4e08d 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
7f1a6398-ada9-4839-8a62-2d1572a4e08d ']' 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.002 [2024-11-20 11:27:37.912029] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:55.002 [2024-11-20 11:27:37.912060] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:55.002 [2024-11-20 11:27:37.912154] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.002 [2024-11-20 11:27:37.912253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:55.002 [2024-11-20 11:27:37.912275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:55.002 
11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.002 11:27:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.002 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:55.002 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:55.002 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.002 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.002 11:27:38 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.002 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:55.002 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:55.002 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.002 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.002 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.002 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:55.002 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:55.002 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:17:55.002 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:55.002 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:55.002 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.002 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:55.002 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.002 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:55.002 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:55.002 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.002 [2024-11-20 11:27:38.075723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:55.002 [2024-11-20 11:27:38.077775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:55.003 [2024-11-20 11:27:38.077827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:55.003 [2024-11-20 11:27:38.077861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:55.003 [2024-11-20 11:27:38.077912] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:55.003 [2024-11-20 11:27:38.077966] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:55.003 [2024-11-20 11:27:38.077985] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:55.003 [2024-11-20 11:27:38.078003] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:55.003 [2024-11-20 11:27:38.078016] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:55.003 [2024-11-20 11:27:38.078029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:55.003 request: 00:17:55.003 { 00:17:55.003 "name": "raid_bdev1", 00:17:55.003 "raid_level": "raid5f", 00:17:55.003 "base_bdevs": [ 00:17:55.003 "malloc1", 00:17:55.003 "malloc2", 00:17:55.003 "malloc3", 00:17:55.003 "malloc4" 00:17:55.003 ], 00:17:55.003 "strip_size_kb": 64, 00:17:55.003 "superblock": false, 00:17:55.003 "method": "bdev_raid_create", 00:17:55.003 "req_id": 1 00:17:55.003 } 00:17:55.003 Got JSON-RPC error response 
00:17:55.003 response: 00:17:55.003 { 00:17:55.003 "code": -17, 00:17:55.003 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:55.003 } 00:17:55.003 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:55.003 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:17:55.003 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:55.003 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:55.003 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:55.003 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.003 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:55.003 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.003 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.003 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.262 [2024-11-20 11:27:38.143614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:55.262 [2024-11-20 11:27:38.143676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:17:55.262 [2024-11-20 11:27:38.143694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:55.262 [2024-11-20 11:27:38.143704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.262 [2024-11-20 11:27:38.146038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.262 [2024-11-20 11:27:38.146081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:55.262 [2024-11-20 11:27:38.146167] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:55.262 [2024-11-20 11:27:38.146235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:55.262 pt1 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.262 "name": "raid_bdev1", 00:17:55.262 "uuid": "7f1a6398-ada9-4839-8a62-2d1572a4e08d", 00:17:55.262 "strip_size_kb": 64, 00:17:55.262 "state": "configuring", 00:17:55.262 "raid_level": "raid5f", 00:17:55.262 "superblock": true, 00:17:55.262 "num_base_bdevs": 4, 00:17:55.262 "num_base_bdevs_discovered": 1, 00:17:55.262 "num_base_bdevs_operational": 4, 00:17:55.262 "base_bdevs_list": [ 00:17:55.262 { 00:17:55.262 "name": "pt1", 00:17:55.262 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:55.262 "is_configured": true, 00:17:55.262 "data_offset": 2048, 00:17:55.262 "data_size": 63488 00:17:55.262 }, 00:17:55.262 { 00:17:55.262 "name": null, 00:17:55.262 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:55.262 "is_configured": false, 00:17:55.262 "data_offset": 2048, 00:17:55.262 "data_size": 63488 00:17:55.262 }, 00:17:55.262 { 00:17:55.262 "name": null, 00:17:55.262 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:55.262 "is_configured": false, 00:17:55.262 "data_offset": 2048, 00:17:55.262 "data_size": 63488 00:17:55.262 }, 00:17:55.262 { 00:17:55.262 "name": null, 00:17:55.262 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:55.262 "is_configured": false, 00:17:55.262 "data_offset": 2048, 00:17:55.262 "data_size": 63488 00:17:55.262 } 00:17:55.262 ] 00:17:55.262 }' 
00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.262 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.520 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:55.520 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:55.520 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.520 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.520 [2024-11-20 11:27:38.618836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:55.520 [2024-11-20 11:27:38.618916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.520 [2024-11-20 11:27:38.618939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:55.520 [2024-11-20 11:27:38.618951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.520 [2024-11-20 11:27:38.619460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.520 [2024-11-20 11:27:38.619519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:55.520 [2024-11-20 11:27:38.619617] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:55.520 [2024-11-20 11:27:38.619649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:55.520 pt2 00:17:55.520 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.520 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:55.520 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:55.520 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.520 [2024-11-20 11:27:38.630826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:55.778 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.778 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:55.778 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.778 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.778 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:55.778 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.778 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:55.778 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.778 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.778 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.778 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.778 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.778 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.778 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.778 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.778 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:55.778 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.778 "name": "raid_bdev1", 00:17:55.778 "uuid": "7f1a6398-ada9-4839-8a62-2d1572a4e08d", 00:17:55.778 "strip_size_kb": 64, 00:17:55.778 "state": "configuring", 00:17:55.778 "raid_level": "raid5f", 00:17:55.778 "superblock": true, 00:17:55.778 "num_base_bdevs": 4, 00:17:55.778 "num_base_bdevs_discovered": 1, 00:17:55.778 "num_base_bdevs_operational": 4, 00:17:55.778 "base_bdevs_list": [ 00:17:55.778 { 00:17:55.778 "name": "pt1", 00:17:55.778 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:55.778 "is_configured": true, 00:17:55.778 "data_offset": 2048, 00:17:55.778 "data_size": 63488 00:17:55.778 }, 00:17:55.778 { 00:17:55.778 "name": null, 00:17:55.778 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:55.778 "is_configured": false, 00:17:55.778 "data_offset": 0, 00:17:55.778 "data_size": 63488 00:17:55.778 }, 00:17:55.778 { 00:17:55.778 "name": null, 00:17:55.778 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:55.778 "is_configured": false, 00:17:55.778 "data_offset": 2048, 00:17:55.778 "data_size": 63488 00:17:55.778 }, 00:17:55.778 { 00:17:55.778 "name": null, 00:17:55.778 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:55.778 "is_configured": false, 00:17:55.778 "data_offset": 2048, 00:17:55.778 "data_size": 63488 00:17:55.778 } 00:17:55.778 ] 00:17:55.778 }' 00:17:55.778 11:27:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.778 11:27:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.035 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:56.035 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:56.035 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:17:56.035 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.035 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.035 [2024-11-20 11:27:39.090046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:56.035 [2024-11-20 11:27:39.090122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.035 [2024-11-20 11:27:39.090163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:56.035 [2024-11-20 11:27:39.090173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.035 [2024-11-20 11:27:39.090695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.035 [2024-11-20 11:27:39.090723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:56.035 [2024-11-20 11:27:39.090828] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:56.035 [2024-11-20 11:27:39.090855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:56.035 pt2 00:17:56.035 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.036 [2024-11-20 11:27:39.101981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:17:56.036 [2024-11-20 11:27:39.102033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.036 [2024-11-20 11:27:39.102051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:56.036 [2024-11-20 11:27:39.102059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.036 [2024-11-20 11:27:39.102509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.036 [2024-11-20 11:27:39.102541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:56.036 [2024-11-20 11:27:39.102613] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:56.036 [2024-11-20 11:27:39.102640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:56.036 pt3 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.036 [2024-11-20 11:27:39.113929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:56.036 [2024-11-20 11:27:39.113997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.036 [2024-11-20 11:27:39.114015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:56.036 [2024-11-20 11:27:39.114023] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.036 [2024-11-20 11:27:39.114393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.036 [2024-11-20 11:27:39.114416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:56.036 [2024-11-20 11:27:39.114493] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:56.036 [2024-11-20 11:27:39.114513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:56.036 [2024-11-20 11:27:39.114661] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:56.036 [2024-11-20 11:27:39.114677] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:56.036 [2024-11-20 11:27:39.114918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:56.036 [2024-11-20 11:27:39.122303] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:56.036 [2024-11-20 11:27:39.122329] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:56.036 [2024-11-20 11:27:39.122560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.036 pt4 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.036 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.357 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.357 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.357 "name": "raid_bdev1", 00:17:56.357 "uuid": "7f1a6398-ada9-4839-8a62-2d1572a4e08d", 00:17:56.357 "strip_size_kb": 64, 00:17:56.357 "state": "online", 00:17:56.357 "raid_level": "raid5f", 00:17:56.357 "superblock": true, 00:17:56.357 "num_base_bdevs": 4, 00:17:56.357 "num_base_bdevs_discovered": 4, 00:17:56.357 "num_base_bdevs_operational": 4, 00:17:56.357 "base_bdevs_list": [ 00:17:56.357 { 00:17:56.357 "name": "pt1", 00:17:56.357 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:56.357 "is_configured": true, 00:17:56.357 
"data_offset": 2048, 00:17:56.357 "data_size": 63488 00:17:56.357 }, 00:17:56.357 { 00:17:56.357 "name": "pt2", 00:17:56.357 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.357 "is_configured": true, 00:17:56.357 "data_offset": 2048, 00:17:56.357 "data_size": 63488 00:17:56.357 }, 00:17:56.357 { 00:17:56.357 "name": "pt3", 00:17:56.357 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:56.357 "is_configured": true, 00:17:56.357 "data_offset": 2048, 00:17:56.357 "data_size": 63488 00:17:56.357 }, 00:17:56.357 { 00:17:56.357 "name": "pt4", 00:17:56.357 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:56.357 "is_configured": true, 00:17:56.357 "data_offset": 2048, 00:17:56.357 "data_size": 63488 00:17:56.357 } 00:17:56.357 ] 00:17:56.357 }' 00:17:56.357 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.357 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.630 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:56.630 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:56.630 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:56.630 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:56.630 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:56.630 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:56.630 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:56.630 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:56.631 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.631 11:27:39 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.631 [2024-11-20 11:27:39.551601] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:56.631 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.631 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:56.631 "name": "raid_bdev1", 00:17:56.631 "aliases": [ 00:17:56.631 "7f1a6398-ada9-4839-8a62-2d1572a4e08d" 00:17:56.631 ], 00:17:56.631 "product_name": "Raid Volume", 00:17:56.631 "block_size": 512, 00:17:56.631 "num_blocks": 190464, 00:17:56.631 "uuid": "7f1a6398-ada9-4839-8a62-2d1572a4e08d", 00:17:56.631 "assigned_rate_limits": { 00:17:56.631 "rw_ios_per_sec": 0, 00:17:56.631 "rw_mbytes_per_sec": 0, 00:17:56.631 "r_mbytes_per_sec": 0, 00:17:56.631 "w_mbytes_per_sec": 0 00:17:56.631 }, 00:17:56.631 "claimed": false, 00:17:56.631 "zoned": false, 00:17:56.631 "supported_io_types": { 00:17:56.631 "read": true, 00:17:56.631 "write": true, 00:17:56.631 "unmap": false, 00:17:56.631 "flush": false, 00:17:56.631 "reset": true, 00:17:56.631 "nvme_admin": false, 00:17:56.631 "nvme_io": false, 00:17:56.631 "nvme_io_md": false, 00:17:56.631 "write_zeroes": true, 00:17:56.631 "zcopy": false, 00:17:56.631 "get_zone_info": false, 00:17:56.631 "zone_management": false, 00:17:56.631 "zone_append": false, 00:17:56.631 "compare": false, 00:17:56.631 "compare_and_write": false, 00:17:56.631 "abort": false, 00:17:56.631 "seek_hole": false, 00:17:56.631 "seek_data": false, 00:17:56.631 "copy": false, 00:17:56.631 "nvme_iov_md": false 00:17:56.631 }, 00:17:56.631 "driver_specific": { 00:17:56.631 "raid": { 00:17:56.631 "uuid": "7f1a6398-ada9-4839-8a62-2d1572a4e08d", 00:17:56.631 "strip_size_kb": 64, 00:17:56.631 "state": "online", 00:17:56.631 "raid_level": "raid5f", 00:17:56.631 "superblock": true, 00:17:56.631 "num_base_bdevs": 4, 00:17:56.631 "num_base_bdevs_discovered": 4, 
00:17:56.631 "num_base_bdevs_operational": 4, 00:17:56.631 "base_bdevs_list": [ 00:17:56.631 { 00:17:56.631 "name": "pt1", 00:17:56.631 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:56.631 "is_configured": true, 00:17:56.631 "data_offset": 2048, 00:17:56.631 "data_size": 63488 00:17:56.631 }, 00:17:56.631 { 00:17:56.631 "name": "pt2", 00:17:56.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.631 "is_configured": true, 00:17:56.631 "data_offset": 2048, 00:17:56.631 "data_size": 63488 00:17:56.631 }, 00:17:56.631 { 00:17:56.631 "name": "pt3", 00:17:56.631 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:56.631 "is_configured": true, 00:17:56.631 "data_offset": 2048, 00:17:56.631 "data_size": 63488 00:17:56.631 }, 00:17:56.631 { 00:17:56.631 "name": "pt4", 00:17:56.631 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:56.631 "is_configured": true, 00:17:56.631 "data_offset": 2048, 00:17:56.631 "data_size": 63488 00:17:56.631 } 00:17:56.631 ] 00:17:56.631 } 00:17:56.631 } 00:17:56.631 }' 00:17:56.631 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:56.631 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:56.631 pt2 00:17:56.631 pt3 00:17:56.631 pt4' 00:17:56.631 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:56.631 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:56.631 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:56.631 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:56.631 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:17:56.631 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.631 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.631 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.631 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:56.631 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:56.631 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:56.631 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:56.631 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:56.631 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.631 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.890 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.890 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:56.890 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:56.890 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:56.890 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:56.890 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.891 11:27:39 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.891 [2024-11-20 11:27:39.895004] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.891 
11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7f1a6398-ada9-4839-8a62-2d1572a4e08d '!=' 7f1a6398-ada9-4839-8a62-2d1572a4e08d ']' 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.891 [2024-11-20 11:27:39.942748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.891 11:27:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.891 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.891 "name": "raid_bdev1", 00:17:56.891 "uuid": "7f1a6398-ada9-4839-8a62-2d1572a4e08d", 00:17:56.891 "strip_size_kb": 64, 00:17:56.891 "state": "online", 00:17:56.891 "raid_level": "raid5f", 00:17:56.891 "superblock": true, 00:17:56.891 "num_base_bdevs": 4, 00:17:56.891 "num_base_bdevs_discovered": 3, 00:17:56.891 "num_base_bdevs_operational": 3, 00:17:56.891 "base_bdevs_list": [ 00:17:56.891 { 00:17:56.891 "name": null, 00:17:56.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.891 "is_configured": false, 00:17:56.891 "data_offset": 0, 00:17:56.891 "data_size": 63488 00:17:56.891 }, 00:17:56.891 { 00:17:56.891 "name": "pt2", 00:17:56.891 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.891 "is_configured": true, 00:17:56.891 "data_offset": 2048, 00:17:56.891 "data_size": 63488 00:17:56.891 }, 00:17:56.891 { 00:17:56.891 "name": "pt3", 00:17:56.891 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:56.891 "is_configured": true, 00:17:56.891 "data_offset": 2048, 00:17:56.891 "data_size": 63488 00:17:56.891 }, 00:17:56.891 { 00:17:56.891 "name": "pt4", 00:17:56.891 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:56.891 "is_configured": true, 00:17:56.891 
"data_offset": 2048, 00:17:56.891 "data_size": 63488 00:17:56.891 } 00:17:56.891 ] 00:17:56.891 }' 00:17:56.891 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.891 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.468 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:57.468 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.468 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.468 [2024-11-20 11:27:40.390005] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:57.468 [2024-11-20 11:27:40.390044] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:57.468 [2024-11-20 11:27:40.390153] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.468 [2024-11-20 11:27:40.390242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.468 [2024-11-20 11:27:40.390253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.469 [2024-11-20 11:27:40.485800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:57.469 [2024-11-20 11:27:40.485859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.469 [2024-11-20 11:27:40.485880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:57.469 [2024-11-20 11:27:40.485889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.469 [2024-11-20 11:27:40.488367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.469 [2024-11-20 11:27:40.488412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:57.469 [2024-11-20 11:27:40.488513] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:57.469 [2024-11-20 11:27:40.488568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:57.469 pt2 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.469 "name": "raid_bdev1", 00:17:57.469 "uuid": "7f1a6398-ada9-4839-8a62-2d1572a4e08d", 00:17:57.469 "strip_size_kb": 64, 00:17:57.469 "state": "configuring", 00:17:57.469 "raid_level": "raid5f", 00:17:57.469 "superblock": true, 00:17:57.469 
"num_base_bdevs": 4, 00:17:57.469 "num_base_bdevs_discovered": 1, 00:17:57.469 "num_base_bdevs_operational": 3, 00:17:57.469 "base_bdevs_list": [ 00:17:57.469 { 00:17:57.469 "name": null, 00:17:57.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.469 "is_configured": false, 00:17:57.469 "data_offset": 2048, 00:17:57.469 "data_size": 63488 00:17:57.469 }, 00:17:57.469 { 00:17:57.469 "name": "pt2", 00:17:57.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.469 "is_configured": true, 00:17:57.469 "data_offset": 2048, 00:17:57.469 "data_size": 63488 00:17:57.469 }, 00:17:57.469 { 00:17:57.469 "name": null, 00:17:57.469 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:57.469 "is_configured": false, 00:17:57.469 "data_offset": 2048, 00:17:57.469 "data_size": 63488 00:17:57.469 }, 00:17:57.469 { 00:17:57.469 "name": null, 00:17:57.469 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:57.469 "is_configured": false, 00:17:57.469 "data_offset": 2048, 00:17:57.469 "data_size": 63488 00:17:57.469 } 00:17:57.469 ] 00:17:57.469 }' 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.469 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.037 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:58.037 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:58.037 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:58.037 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.037 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.037 [2024-11-20 11:27:40.937074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:58.037 [2024-11-20 
11:27:40.937150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.037 [2024-11-20 11:27:40.937174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:58.037 [2024-11-20 11:27:40.937183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.037 [2024-11-20 11:27:40.937664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.037 [2024-11-20 11:27:40.937692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:58.037 [2024-11-20 11:27:40.937786] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:58.037 [2024-11-20 11:27:40.937821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:58.037 pt3 00:17:58.037 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.037 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:58.037 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.037 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.037 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:58.037 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.037 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:58.037 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.037 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.037 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:58.037 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.037 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.037 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.037 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.037 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.038 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.038 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.038 "name": "raid_bdev1", 00:17:58.038 "uuid": "7f1a6398-ada9-4839-8a62-2d1572a4e08d", 00:17:58.038 "strip_size_kb": 64, 00:17:58.038 "state": "configuring", 00:17:58.038 "raid_level": "raid5f", 00:17:58.038 "superblock": true, 00:17:58.038 "num_base_bdevs": 4, 00:17:58.038 "num_base_bdevs_discovered": 2, 00:17:58.038 "num_base_bdevs_operational": 3, 00:17:58.038 "base_bdevs_list": [ 00:17:58.038 { 00:17:58.038 "name": null, 00:17:58.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.038 "is_configured": false, 00:17:58.038 "data_offset": 2048, 00:17:58.038 "data_size": 63488 00:17:58.038 }, 00:17:58.038 { 00:17:58.038 "name": "pt2", 00:17:58.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.038 "is_configured": true, 00:17:58.038 "data_offset": 2048, 00:17:58.038 "data_size": 63488 00:17:58.038 }, 00:17:58.038 { 00:17:58.038 "name": "pt3", 00:17:58.038 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:58.038 "is_configured": true, 00:17:58.038 "data_offset": 2048, 00:17:58.038 "data_size": 63488 00:17:58.038 }, 00:17:58.038 { 00:17:58.038 "name": null, 00:17:58.038 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:58.038 "is_configured": false, 00:17:58.038 "data_offset": 2048, 
00:17:58.038 "data_size": 63488 00:17:58.038 } 00:17:58.038 ] 00:17:58.038 }' 00:17:58.038 11:27:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.038 11:27:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.296 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:58.296 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:58.296 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:17:58.296 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:58.296 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.296 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.296 [2024-11-20 11:27:41.400321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:58.296 [2024-11-20 11:27:41.400396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.296 [2024-11-20 11:27:41.400420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:58.296 [2024-11-20 11:27:41.400430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.296 [2024-11-20 11:27:41.400918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.296 [2024-11-20 11:27:41.400947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:58.296 [2024-11-20 11:27:41.401042] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:58.296 [2024-11-20 11:27:41.401068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:58.296 [2024-11-20 11:27:41.401226] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:58.296 [2024-11-20 11:27:41.401243] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:58.296 [2024-11-20 11:27:41.401498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:58.296 [2024-11-20 11:27:41.409353] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:58.296 [2024-11-20 11:27:41.409385] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:58.296 [2024-11-20 11:27:41.409706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.296 pt4 00:17:58.555 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.555 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:58.555 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.555 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.555 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:58.555 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.555 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:58.555 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.555 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.556 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.556 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.556 
11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.556 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.556 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.556 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.556 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.556 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.556 "name": "raid_bdev1", 00:17:58.556 "uuid": "7f1a6398-ada9-4839-8a62-2d1572a4e08d", 00:17:58.556 "strip_size_kb": 64, 00:17:58.556 "state": "online", 00:17:58.556 "raid_level": "raid5f", 00:17:58.556 "superblock": true, 00:17:58.556 "num_base_bdevs": 4, 00:17:58.556 "num_base_bdevs_discovered": 3, 00:17:58.556 "num_base_bdevs_operational": 3, 00:17:58.556 "base_bdevs_list": [ 00:17:58.556 { 00:17:58.556 "name": null, 00:17:58.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.556 "is_configured": false, 00:17:58.556 "data_offset": 2048, 00:17:58.556 "data_size": 63488 00:17:58.556 }, 00:17:58.556 { 00:17:58.556 "name": "pt2", 00:17:58.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.556 "is_configured": true, 00:17:58.556 "data_offset": 2048, 00:17:58.556 "data_size": 63488 00:17:58.556 }, 00:17:58.556 { 00:17:58.556 "name": "pt3", 00:17:58.556 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:58.556 "is_configured": true, 00:17:58.556 "data_offset": 2048, 00:17:58.556 "data_size": 63488 00:17:58.556 }, 00:17:58.556 { 00:17:58.556 "name": "pt4", 00:17:58.556 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:58.556 "is_configured": true, 00:17:58.556 "data_offset": 2048, 00:17:58.556 "data_size": 63488 00:17:58.556 } 00:17:58.556 ] 00:17:58.556 }' 00:17:58.556 11:27:41 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.556 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.816 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:58.816 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.816 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.816 [2024-11-20 11:27:41.839401] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:58.816 [2024-11-20 11:27:41.839438] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:58.816 [2024-11-20 11:27:41.839544] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.816 [2024-11-20 11:27:41.839624] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.816 [2024-11-20 11:27:41.839638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:58.816 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.816 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.816 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.816 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.816 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:58.816 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.816 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:58.816 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:17:58.816 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:17:58.816 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:17:58.816 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:17:58.816 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.816 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.816 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.816 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:58.816 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.817 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.817 [2024-11-20 11:27:41.915261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:58.817 [2024-11-20 11:27:41.915337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.817 [2024-11-20 11:27:41.915365] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:58.817 [2024-11-20 11:27:41.915378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.817 [2024-11-20 11:27:41.918072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.817 [2024-11-20 11:27:41.918120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:58.817 [2024-11-20 11:27:41.918230] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:58.817 [2024-11-20 11:27:41.918329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:58.817 
[2024-11-20 11:27:41.918530] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:58.817 [2024-11-20 11:27:41.918555] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:58.817 [2024-11-20 11:27:41.918574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:58.817 [2024-11-20 11:27:41.918656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:58.817 [2024-11-20 11:27:41.918783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:58.817 pt1 00:17:58.817 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.817 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:17:58.817 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:58.817 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.817 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.817 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:58.817 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.817 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:58.817 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.817 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.817 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.817 11:27:41 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.817 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.817 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.817 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.817 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.077 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.077 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.077 "name": "raid_bdev1", 00:17:59.077 "uuid": "7f1a6398-ada9-4839-8a62-2d1572a4e08d", 00:17:59.077 "strip_size_kb": 64, 00:17:59.077 "state": "configuring", 00:17:59.077 "raid_level": "raid5f", 00:17:59.077 "superblock": true, 00:17:59.077 "num_base_bdevs": 4, 00:17:59.077 "num_base_bdevs_discovered": 2, 00:17:59.077 "num_base_bdevs_operational": 3, 00:17:59.077 "base_bdevs_list": [ 00:17:59.077 { 00:17:59.077 "name": null, 00:17:59.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.077 "is_configured": false, 00:17:59.077 "data_offset": 2048, 00:17:59.077 "data_size": 63488 00:17:59.077 }, 00:17:59.077 { 00:17:59.077 "name": "pt2", 00:17:59.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.077 "is_configured": true, 00:17:59.077 "data_offset": 2048, 00:17:59.077 "data_size": 63488 00:17:59.077 }, 00:17:59.077 { 00:17:59.077 "name": "pt3", 00:17:59.077 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:59.077 "is_configured": true, 00:17:59.077 "data_offset": 2048, 00:17:59.077 "data_size": 63488 00:17:59.077 }, 00:17:59.077 { 00:17:59.077 "name": null, 00:17:59.077 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:59.077 "is_configured": false, 00:17:59.077 "data_offset": 2048, 00:17:59.077 "data_size": 63488 00:17:59.077 } 00:17:59.077 ] 
00:17:59.077 }' 00:17:59.077 11:27:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.077 11:27:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.338 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:59.338 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.338 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.338 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:59.338 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.338 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:59.338 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:59.338 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.338 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.338 [2024-11-20 11:27:42.410495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:59.338 [2024-11-20 11:27:42.410613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.338 [2024-11-20 11:27:42.410661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:59.338 [2024-11-20 11:27:42.410693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.338 [2024-11-20 11:27:42.411195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.338 [2024-11-20 11:27:42.411262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:17:59.338 [2024-11-20 11:27:42.411381] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:59.338 [2024-11-20 11:27:42.411448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:59.338 [2024-11-20 11:27:42.411713] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:59.338 [2024-11-20 11:27:42.411757] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:59.338 [2024-11-20 11:27:42.412057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:59.338 [2024-11-20 11:27:42.421195] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:59.338 [2024-11-20 11:27:42.421278] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:59.338 [2024-11-20 11:27:42.421664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.338 pt4 00:17:59.338 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.338 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:59.338 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.338 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.338 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:59.338 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:59.338 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:59.338 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.338 11:27:42 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.338 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.339 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.339 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.339 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.339 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.339 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.339 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.598 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.598 "name": "raid_bdev1", 00:17:59.598 "uuid": "7f1a6398-ada9-4839-8a62-2d1572a4e08d", 00:17:59.598 "strip_size_kb": 64, 00:17:59.598 "state": "online", 00:17:59.598 "raid_level": "raid5f", 00:17:59.598 "superblock": true, 00:17:59.598 "num_base_bdevs": 4, 00:17:59.598 "num_base_bdevs_discovered": 3, 00:17:59.598 "num_base_bdevs_operational": 3, 00:17:59.598 "base_bdevs_list": [ 00:17:59.598 { 00:17:59.598 "name": null, 00:17:59.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.598 "is_configured": false, 00:17:59.598 "data_offset": 2048, 00:17:59.598 "data_size": 63488 00:17:59.598 }, 00:17:59.598 { 00:17:59.598 "name": "pt2", 00:17:59.598 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.598 "is_configured": true, 00:17:59.598 "data_offset": 2048, 00:17:59.598 "data_size": 63488 00:17:59.598 }, 00:17:59.598 { 00:17:59.598 "name": "pt3", 00:17:59.598 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:59.598 "is_configured": true, 00:17:59.598 "data_offset": 2048, 00:17:59.598 "data_size": 63488 
00:17:59.598 }, 00:17:59.598 { 00:17:59.598 "name": "pt4", 00:17:59.598 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:59.598 "is_configured": true, 00:17:59.598 "data_offset": 2048, 00:17:59.598 "data_size": 63488 00:17:59.598 } 00:17:59.598 ] 00:17:59.598 }' 00:17:59.598 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.598 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.870 [2024-11-20 11:27:42.871041] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7f1a6398-ada9-4839-8a62-2d1572a4e08d '!=' 7f1a6398-ada9-4839-8a62-2d1572a4e08d ']' 00:17:59.870 11:27:42 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84326 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84326 ']' 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84326 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84326 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.870 killing process with pid 84326 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84326' 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84326 00:17:59.870 [2024-11-20 11:27:42.940272] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:59.870 [2024-11-20 11:27:42.940391] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.870 11:27:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84326 00:17:59.870 [2024-11-20 11:27:42.940502] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:59.870 [2024-11-20 11:27:42.940520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:00.453 [2024-11-20 11:27:43.412122] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:01.863 11:27:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:01.863 
00:18:01.863 real 0m8.779s 00:18:01.863 user 0m13.788s 00:18:01.863 sys 0m1.521s 00:18:01.863 11:27:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:01.863 11:27:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.863 ************************************ 00:18:01.863 END TEST raid5f_superblock_test 00:18:01.863 ************************************ 00:18:01.863 11:27:44 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:01.863 11:27:44 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:18:01.863 11:27:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:01.863 11:27:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:01.863 11:27:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:01.863 ************************************ 00:18:01.863 START TEST raid5f_rebuild_test 00:18:01.863 ************************************ 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:01.863 11:27:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84817 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84817 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84817 ']' 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.863 11:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.864 11:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.864 11:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.864 11:27:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.864 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:01.864 Zero copy mechanism will not be used. 00:18:01.864 [2024-11-20 11:27:44.754423] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:18:01.864 [2024-11-20 11:27:44.754552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84817 ] 00:18:01.864 [2024-11-20 11:27:44.932340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.123 [2024-11-20 11:27:45.043670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.382 [2024-11-20 11:27:45.253651] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.382 [2024-11-20 11:27:45.253683] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.642 BaseBdev1_malloc 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.642 [2024-11-20 11:27:45.646912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:18:02.642 [2024-11-20 11:27:45.647037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.642 [2024-11-20 11:27:45.647128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:02.642 [2024-11-20 11:27:45.647165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.642 [2024-11-20 11:27:45.649405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.642 [2024-11-20 11:27:45.649497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:02.642 BaseBdev1 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.642 BaseBdev2_malloc 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.642 [2024-11-20 11:27:45.703385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:02.642 [2024-11-20 11:27:45.703514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.642 [2024-11-20 11:27:45.703537] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:02.642 [2024-11-20 11:27:45.703549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.642 [2024-11-20 11:27:45.705670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.642 [2024-11-20 11:27:45.705709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:02.642 BaseBdev2 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.642 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.902 BaseBdev3_malloc 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.902 [2024-11-20 11:27:45.772139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:02.902 [2024-11-20 11:27:45.772245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.902 [2024-11-20 11:27:45.772283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:02.902 [2024-11-20 11:27:45.772319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.902 
[2024-11-20 11:27:45.774348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.902 [2024-11-20 11:27:45.774442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:02.902 BaseBdev3 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.902 BaseBdev4_malloc 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.902 [2024-11-20 11:27:45.825492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:02.902 [2024-11-20 11:27:45.825547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.902 [2024-11-20 11:27:45.825567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:02.902 [2024-11-20 11:27:45.825577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.902 [2024-11-20 11:27:45.827610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.902 [2024-11-20 11:27:45.827654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:18:02.902 BaseBdev4 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.902 spare_malloc 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.902 spare_delay 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.902 [2024-11-20 11:27:45.892709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:02.902 [2024-11-20 11:27:45.892824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.902 [2024-11-20 11:27:45.892865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:02.902 [2024-11-20 11:27:45.892915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.902 [2024-11-20 11:27:45.895035] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.902 [2024-11-20 11:27:45.895109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:02.902 spare 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.902 [2024-11-20 11:27:45.904738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:02.902 [2024-11-20 11:27:45.906576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:02.902 [2024-11-20 11:27:45.906674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:02.902 [2024-11-20 11:27:45.906729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:02.902 [2024-11-20 11:27:45.906813] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:02.902 [2024-11-20 11:27:45.906825] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:02.902 [2024-11-20 11:27:45.907072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:02.902 [2024-11-20 11:27:45.914409] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:02.902 [2024-11-20 11:27:45.914429] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:02.902 [2024-11-20 11:27:45.914626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.902 11:27:45 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.902 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.902 "name": "raid_bdev1", 00:18:02.902 "uuid": "22290a4e-ec5d-4909-b7d3-9355ec863c18", 00:18:02.902 "strip_size_kb": 64, 00:18:02.902 "state": "online", 00:18:02.902 
"raid_level": "raid5f", 00:18:02.902 "superblock": false, 00:18:02.902 "num_base_bdevs": 4, 00:18:02.902 "num_base_bdevs_discovered": 4, 00:18:02.902 "num_base_bdevs_operational": 4, 00:18:02.902 "base_bdevs_list": [ 00:18:02.902 { 00:18:02.902 "name": "BaseBdev1", 00:18:02.902 "uuid": "2100158a-5757-5e27-9a55-f753c772c0ad", 00:18:02.902 "is_configured": true, 00:18:02.902 "data_offset": 0, 00:18:02.902 "data_size": 65536 00:18:02.902 }, 00:18:02.902 { 00:18:02.902 "name": "BaseBdev2", 00:18:02.902 "uuid": "26fc3072-a745-5a24-b662-a46897eb6208", 00:18:02.902 "is_configured": true, 00:18:02.902 "data_offset": 0, 00:18:02.902 "data_size": 65536 00:18:02.902 }, 00:18:02.902 { 00:18:02.902 "name": "BaseBdev3", 00:18:02.902 "uuid": "4b4d228c-bbda-5cd0-b7e9-fd478e8d4cde", 00:18:02.902 "is_configured": true, 00:18:02.902 "data_offset": 0, 00:18:02.902 "data_size": 65536 00:18:02.902 }, 00:18:02.902 { 00:18:02.902 "name": "BaseBdev4", 00:18:02.902 "uuid": "3c2e222a-86c1-5b80-a837-1c4deeb6225a", 00:18:02.902 "is_configured": true, 00:18:02.903 "data_offset": 0, 00:18:02.903 "data_size": 65536 00:18:02.903 } 00:18:02.903 ] 00:18:02.903 }' 00:18:02.903 11:27:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.903 11:27:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.471 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:03.471 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:03.471 11:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.471 11:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.471 [2024-11-20 11:27:46.406548] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:03.471 11:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:03.471 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:18:03.471 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:03.472 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.472 11:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.472 11:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.472 11:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.472 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:03.472 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:03.472 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:03.472 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:03.472 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:03.472 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:03.472 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:03.472 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:03.472 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:03.472 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:03.472 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:03.472 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:03.472 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:18:03.472 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:03.730 [2024-11-20 11:27:46.689876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:03.730 /dev/nbd0 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:03.730 1+0 records in 00:18:03.730 1+0 records out 00:18:03.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406532 s, 10.1 MB/s 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:03.730 11:27:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:18:04.300 512+0 records in 00:18:04.300 512+0 records out 00:18:04.300 100663296 bytes (101 MB, 96 MiB) copied, 0.472239 s, 213 MB/s 00:18:04.300 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:04.300 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:04.300 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:04.300 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:04.300 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:04.300 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:04.300 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:04.559 [2024-11-20 11:27:47.454516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.559 [2024-11-20 11:27:47.493322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.559 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.559 "name": "raid_bdev1", 00:18:04.559 "uuid": "22290a4e-ec5d-4909-b7d3-9355ec863c18", 00:18:04.559 "strip_size_kb": 64, 00:18:04.559 "state": "online", 00:18:04.559 "raid_level": "raid5f", 00:18:04.559 "superblock": false, 00:18:04.559 "num_base_bdevs": 4, 00:18:04.559 "num_base_bdevs_discovered": 3, 00:18:04.559 "num_base_bdevs_operational": 3, 00:18:04.559 "base_bdevs_list": [ 00:18:04.559 { 00:18:04.559 "name": null, 00:18:04.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.559 "is_configured": false, 00:18:04.559 "data_offset": 0, 00:18:04.559 "data_size": 65536 00:18:04.559 }, 00:18:04.559 { 00:18:04.559 "name": "BaseBdev2", 00:18:04.559 "uuid": "26fc3072-a745-5a24-b662-a46897eb6208", 00:18:04.559 "is_configured": true, 00:18:04.559 "data_offset": 0, 00:18:04.559 "data_size": 65536 00:18:04.559 }, 00:18:04.559 { 00:18:04.559 "name": "BaseBdev3", 00:18:04.559 "uuid": 
"4b4d228c-bbda-5cd0-b7e9-fd478e8d4cde", 00:18:04.559 "is_configured": true, 00:18:04.559 "data_offset": 0, 00:18:04.560 "data_size": 65536 00:18:04.560 }, 00:18:04.560 { 00:18:04.560 "name": "BaseBdev4", 00:18:04.560 "uuid": "3c2e222a-86c1-5b80-a837-1c4deeb6225a", 00:18:04.560 "is_configured": true, 00:18:04.560 "data_offset": 0, 00:18:04.560 "data_size": 65536 00:18:04.560 } 00:18:04.560 ] 00:18:04.560 }' 00:18:04.560 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.560 11:27:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.129 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:05.129 11:27:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.129 11:27:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.129 [2024-11-20 11:27:47.956616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:05.129 [2024-11-20 11:27:47.976050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:05.129 11:27:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.129 11:27:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:05.129 [2024-11-20 11:27:47.988315] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:06.065 11:27:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.065 11:27:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.065 11:27:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.065 11:27:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.066 11:27:48 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.066 11:27:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.066 11:27:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.066 11:27:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.066 11:27:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.066 11:27:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.066 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.066 "name": "raid_bdev1", 00:18:06.066 "uuid": "22290a4e-ec5d-4909-b7d3-9355ec863c18", 00:18:06.066 "strip_size_kb": 64, 00:18:06.066 "state": "online", 00:18:06.066 "raid_level": "raid5f", 00:18:06.066 "superblock": false, 00:18:06.066 "num_base_bdevs": 4, 00:18:06.066 "num_base_bdevs_discovered": 4, 00:18:06.066 "num_base_bdevs_operational": 4, 00:18:06.066 "process": { 00:18:06.066 "type": "rebuild", 00:18:06.066 "target": "spare", 00:18:06.066 "progress": { 00:18:06.066 "blocks": 17280, 00:18:06.066 "percent": 8 00:18:06.066 } 00:18:06.066 }, 00:18:06.066 "base_bdevs_list": [ 00:18:06.066 { 00:18:06.066 "name": "spare", 00:18:06.066 "uuid": "547211df-49e5-52a0-be33-7a83539eef6d", 00:18:06.066 "is_configured": true, 00:18:06.066 "data_offset": 0, 00:18:06.066 "data_size": 65536 00:18:06.066 }, 00:18:06.066 { 00:18:06.066 "name": "BaseBdev2", 00:18:06.066 "uuid": "26fc3072-a745-5a24-b662-a46897eb6208", 00:18:06.066 "is_configured": true, 00:18:06.066 "data_offset": 0, 00:18:06.066 "data_size": 65536 00:18:06.066 }, 00:18:06.066 { 00:18:06.066 "name": "BaseBdev3", 00:18:06.066 "uuid": "4b4d228c-bbda-5cd0-b7e9-fd478e8d4cde", 00:18:06.066 "is_configured": true, 00:18:06.066 "data_offset": 0, 00:18:06.066 "data_size": 65536 00:18:06.066 }, 
00:18:06.066 { 00:18:06.066 "name": "BaseBdev4", 00:18:06.066 "uuid": "3c2e222a-86c1-5b80-a837-1c4deeb6225a", 00:18:06.066 "is_configured": true, 00:18:06.066 "data_offset": 0, 00:18:06.066 "data_size": 65536 00:18:06.066 } 00:18:06.066 ] 00:18:06.066 }' 00:18:06.066 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.066 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:06.066 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.066 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.066 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:06.066 11:27:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.066 11:27:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.066 [2024-11-20 11:27:49.127863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.324 [2024-11-20 11:27:49.197664] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:06.324 [2024-11-20 11:27:49.197875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.324 [2024-11-20 11:27:49.197945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.324 [2024-11-20 11:27:49.198006] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:06.324 11:27:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.324 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:06.324 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:06.324 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.324 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:06.324 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.324 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:06.324 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.324 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.324 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.324 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.324 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.324 11:27:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.324 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.324 11:27:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.324 11:27:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.324 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.324 "name": "raid_bdev1", 00:18:06.324 "uuid": "22290a4e-ec5d-4909-b7d3-9355ec863c18", 00:18:06.324 "strip_size_kb": 64, 00:18:06.324 "state": "online", 00:18:06.324 "raid_level": "raid5f", 00:18:06.324 "superblock": false, 00:18:06.324 "num_base_bdevs": 4, 00:18:06.324 "num_base_bdevs_discovered": 3, 00:18:06.324 "num_base_bdevs_operational": 3, 00:18:06.324 "base_bdevs_list": [ 00:18:06.324 { 00:18:06.324 "name": null, 00:18:06.324 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:06.324 "is_configured": false, 00:18:06.324 "data_offset": 0, 00:18:06.324 "data_size": 65536 00:18:06.324 }, 00:18:06.324 { 00:18:06.324 "name": "BaseBdev2", 00:18:06.324 "uuid": "26fc3072-a745-5a24-b662-a46897eb6208", 00:18:06.324 "is_configured": true, 00:18:06.324 "data_offset": 0, 00:18:06.324 "data_size": 65536 00:18:06.324 }, 00:18:06.324 { 00:18:06.324 "name": "BaseBdev3", 00:18:06.324 "uuid": "4b4d228c-bbda-5cd0-b7e9-fd478e8d4cde", 00:18:06.324 "is_configured": true, 00:18:06.324 "data_offset": 0, 00:18:06.324 "data_size": 65536 00:18:06.324 }, 00:18:06.324 { 00:18:06.324 "name": "BaseBdev4", 00:18:06.324 "uuid": "3c2e222a-86c1-5b80-a837-1c4deeb6225a", 00:18:06.324 "is_configured": true, 00:18:06.324 "data_offset": 0, 00:18:06.324 "data_size": 65536 00:18:06.324 } 00:18:06.324 ] 00:18:06.324 }' 00:18:06.324 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.324 11:27:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.892 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:06.892 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.892 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:06.892 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:06.892 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.892 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.892 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.892 11:27:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.892 11:27:49 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.892 11:27:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.892 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.892 "name": "raid_bdev1", 00:18:06.892 "uuid": "22290a4e-ec5d-4909-b7d3-9355ec863c18", 00:18:06.892 "strip_size_kb": 64, 00:18:06.892 "state": "online", 00:18:06.892 "raid_level": "raid5f", 00:18:06.892 "superblock": false, 00:18:06.892 "num_base_bdevs": 4, 00:18:06.892 "num_base_bdevs_discovered": 3, 00:18:06.892 "num_base_bdevs_operational": 3, 00:18:06.892 "base_bdevs_list": [ 00:18:06.892 { 00:18:06.892 "name": null, 00:18:06.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.892 "is_configured": false, 00:18:06.892 "data_offset": 0, 00:18:06.892 "data_size": 65536 00:18:06.892 }, 00:18:06.893 { 00:18:06.893 "name": "BaseBdev2", 00:18:06.893 "uuid": "26fc3072-a745-5a24-b662-a46897eb6208", 00:18:06.893 "is_configured": true, 00:18:06.893 "data_offset": 0, 00:18:06.893 "data_size": 65536 00:18:06.893 }, 00:18:06.893 { 00:18:06.893 "name": "BaseBdev3", 00:18:06.893 "uuid": "4b4d228c-bbda-5cd0-b7e9-fd478e8d4cde", 00:18:06.893 "is_configured": true, 00:18:06.893 "data_offset": 0, 00:18:06.893 "data_size": 65536 00:18:06.893 }, 00:18:06.893 { 00:18:06.893 "name": "BaseBdev4", 00:18:06.893 "uuid": "3c2e222a-86c1-5b80-a837-1c4deeb6225a", 00:18:06.893 "is_configured": true, 00:18:06.893 "data_offset": 0, 00:18:06.893 "data_size": 65536 00:18:06.893 } 00:18:06.893 ] 00:18:06.893 }' 00:18:06.893 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.893 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:06.893 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.893 11:27:49 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:06.893 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:06.893 11:27:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.893 11:27:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.893 [2024-11-20 11:27:49.907651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:06.893 [2024-11-20 11:27:49.925899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:18:06.893 11:27:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.893 11:27:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:06.893 [2024-11-20 11:27:49.937529] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:07.830 11:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.830 11:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.830 11:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.830 11:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.830 11:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.830 11:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.830 11:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.830 11:27:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.830 11:27:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.088 11:27:50 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.088 11:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.088 "name": "raid_bdev1", 00:18:08.088 "uuid": "22290a4e-ec5d-4909-b7d3-9355ec863c18", 00:18:08.088 "strip_size_kb": 64, 00:18:08.088 "state": "online", 00:18:08.088 "raid_level": "raid5f", 00:18:08.088 "superblock": false, 00:18:08.088 "num_base_bdevs": 4, 00:18:08.088 "num_base_bdevs_discovered": 4, 00:18:08.088 "num_base_bdevs_operational": 4, 00:18:08.088 "process": { 00:18:08.088 "type": "rebuild", 00:18:08.088 "target": "spare", 00:18:08.088 "progress": { 00:18:08.088 "blocks": 17280, 00:18:08.088 "percent": 8 00:18:08.088 } 00:18:08.088 }, 00:18:08.088 "base_bdevs_list": [ 00:18:08.088 { 00:18:08.088 "name": "spare", 00:18:08.088 "uuid": "547211df-49e5-52a0-be33-7a83539eef6d", 00:18:08.088 "is_configured": true, 00:18:08.088 "data_offset": 0, 00:18:08.088 "data_size": 65536 00:18:08.088 }, 00:18:08.088 { 00:18:08.088 "name": "BaseBdev2", 00:18:08.088 "uuid": "26fc3072-a745-5a24-b662-a46897eb6208", 00:18:08.088 "is_configured": true, 00:18:08.088 "data_offset": 0, 00:18:08.088 "data_size": 65536 00:18:08.088 }, 00:18:08.088 { 00:18:08.088 "name": "BaseBdev3", 00:18:08.088 "uuid": "4b4d228c-bbda-5cd0-b7e9-fd478e8d4cde", 00:18:08.088 "is_configured": true, 00:18:08.088 "data_offset": 0, 00:18:08.088 "data_size": 65536 00:18:08.088 }, 00:18:08.088 { 00:18:08.088 "name": "BaseBdev4", 00:18:08.088 "uuid": "3c2e222a-86c1-5b80-a837-1c4deeb6225a", 00:18:08.088 "is_configured": true, 00:18:08.088 "data_offset": 0, 00:18:08.088 "data_size": 65536 00:18:08.088 } 00:18:08.088 ] 00:18:08.088 }' 00:18:08.088 11:27:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.088 11:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.088 11:27:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.088 11:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.088 11:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:08.088 11:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:08.088 11:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:08.088 11:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=637 00:18:08.088 11:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:08.088 11:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.088 11:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.088 11:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.088 11:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.088 11:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.088 11:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.088 11:27:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.088 11:27:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.088 11:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.088 11:27:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.088 11:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.088 "name": "raid_bdev1", 00:18:08.088 "uuid": "22290a4e-ec5d-4909-b7d3-9355ec863c18", 
00:18:08.088 "strip_size_kb": 64, 00:18:08.088 "state": "online", 00:18:08.088 "raid_level": "raid5f", 00:18:08.088 "superblock": false, 00:18:08.088 "num_base_bdevs": 4, 00:18:08.088 "num_base_bdevs_discovered": 4, 00:18:08.088 "num_base_bdevs_operational": 4, 00:18:08.088 "process": { 00:18:08.088 "type": "rebuild", 00:18:08.088 "target": "spare", 00:18:08.088 "progress": { 00:18:08.088 "blocks": 21120, 00:18:08.088 "percent": 10 00:18:08.088 } 00:18:08.088 }, 00:18:08.088 "base_bdevs_list": [ 00:18:08.088 { 00:18:08.088 "name": "spare", 00:18:08.088 "uuid": "547211df-49e5-52a0-be33-7a83539eef6d", 00:18:08.088 "is_configured": true, 00:18:08.088 "data_offset": 0, 00:18:08.088 "data_size": 65536 00:18:08.088 }, 00:18:08.088 { 00:18:08.088 "name": "BaseBdev2", 00:18:08.088 "uuid": "26fc3072-a745-5a24-b662-a46897eb6208", 00:18:08.088 "is_configured": true, 00:18:08.088 "data_offset": 0, 00:18:08.088 "data_size": 65536 00:18:08.088 }, 00:18:08.088 { 00:18:08.088 "name": "BaseBdev3", 00:18:08.088 "uuid": "4b4d228c-bbda-5cd0-b7e9-fd478e8d4cde", 00:18:08.088 "is_configured": true, 00:18:08.088 "data_offset": 0, 00:18:08.088 "data_size": 65536 00:18:08.088 }, 00:18:08.088 { 00:18:08.088 "name": "BaseBdev4", 00:18:08.088 "uuid": "3c2e222a-86c1-5b80-a837-1c4deeb6225a", 00:18:08.088 "is_configured": true, 00:18:08.088 "data_offset": 0, 00:18:08.088 "data_size": 65536 00:18:08.088 } 00:18:08.088 ] 00:18:08.088 }' 00:18:08.088 11:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.088 11:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.349 11:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.349 11:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.349 11:27:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:09.287 11:27:52 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:09.287 11:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.287 11:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.287 11:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.287 11:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.287 11:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.287 11:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.287 11:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.287 11:27:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.287 11:27:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.287 11:27:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.287 11:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.287 "name": "raid_bdev1", 00:18:09.287 "uuid": "22290a4e-ec5d-4909-b7d3-9355ec863c18", 00:18:09.287 "strip_size_kb": 64, 00:18:09.287 "state": "online", 00:18:09.287 "raid_level": "raid5f", 00:18:09.287 "superblock": false, 00:18:09.287 "num_base_bdevs": 4, 00:18:09.287 "num_base_bdevs_discovered": 4, 00:18:09.287 "num_base_bdevs_operational": 4, 00:18:09.287 "process": { 00:18:09.287 "type": "rebuild", 00:18:09.287 "target": "spare", 00:18:09.287 "progress": { 00:18:09.287 "blocks": 44160, 00:18:09.287 "percent": 22 00:18:09.287 } 00:18:09.287 }, 00:18:09.287 "base_bdevs_list": [ 00:18:09.287 { 00:18:09.287 "name": "spare", 00:18:09.287 "uuid": "547211df-49e5-52a0-be33-7a83539eef6d", 
00:18:09.287 "is_configured": true, 00:18:09.287 "data_offset": 0, 00:18:09.287 "data_size": 65536 00:18:09.287 }, 00:18:09.287 { 00:18:09.287 "name": "BaseBdev2", 00:18:09.287 "uuid": "26fc3072-a745-5a24-b662-a46897eb6208", 00:18:09.287 "is_configured": true, 00:18:09.287 "data_offset": 0, 00:18:09.287 "data_size": 65536 00:18:09.287 }, 00:18:09.287 { 00:18:09.287 "name": "BaseBdev3", 00:18:09.287 "uuid": "4b4d228c-bbda-5cd0-b7e9-fd478e8d4cde", 00:18:09.287 "is_configured": true, 00:18:09.287 "data_offset": 0, 00:18:09.287 "data_size": 65536 00:18:09.287 }, 00:18:09.287 { 00:18:09.287 "name": "BaseBdev4", 00:18:09.287 "uuid": "3c2e222a-86c1-5b80-a837-1c4deeb6225a", 00:18:09.287 "is_configured": true, 00:18:09.287 "data_offset": 0, 00:18:09.287 "data_size": 65536 00:18:09.287 } 00:18:09.287 ] 00:18:09.287 }' 00:18:09.287 11:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.287 11:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.287 11:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.287 11:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.287 11:27:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:10.667 11:27:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:10.667 11:27:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.667 11:27:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.667 11:27:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:10.667 11:27:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:10.667 11:27:53 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.667 11:27:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.667 11:27:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.667 11:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.667 11:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.667 11:27:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.667 11:27:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.667 "name": "raid_bdev1", 00:18:10.667 "uuid": "22290a4e-ec5d-4909-b7d3-9355ec863c18", 00:18:10.667 "strip_size_kb": 64, 00:18:10.667 "state": "online", 00:18:10.667 "raid_level": "raid5f", 00:18:10.667 "superblock": false, 00:18:10.667 "num_base_bdevs": 4, 00:18:10.667 "num_base_bdevs_discovered": 4, 00:18:10.667 "num_base_bdevs_operational": 4, 00:18:10.667 "process": { 00:18:10.667 "type": "rebuild", 00:18:10.667 "target": "spare", 00:18:10.667 "progress": { 00:18:10.667 "blocks": 65280, 00:18:10.667 "percent": 33 00:18:10.667 } 00:18:10.667 }, 00:18:10.667 "base_bdevs_list": [ 00:18:10.667 { 00:18:10.667 "name": "spare", 00:18:10.667 "uuid": "547211df-49e5-52a0-be33-7a83539eef6d", 00:18:10.667 "is_configured": true, 00:18:10.667 "data_offset": 0, 00:18:10.667 "data_size": 65536 00:18:10.667 }, 00:18:10.667 { 00:18:10.667 "name": "BaseBdev2", 00:18:10.667 "uuid": "26fc3072-a745-5a24-b662-a46897eb6208", 00:18:10.667 "is_configured": true, 00:18:10.667 "data_offset": 0, 00:18:10.667 "data_size": 65536 00:18:10.667 }, 00:18:10.667 { 00:18:10.667 "name": "BaseBdev3", 00:18:10.667 "uuid": "4b4d228c-bbda-5cd0-b7e9-fd478e8d4cde", 00:18:10.667 "is_configured": true, 00:18:10.667 "data_offset": 0, 00:18:10.667 "data_size": 65536 00:18:10.667 }, 00:18:10.667 { 00:18:10.667 "name": 
"BaseBdev4", 00:18:10.667 "uuid": "3c2e222a-86c1-5b80-a837-1c4deeb6225a", 00:18:10.667 "is_configured": true, 00:18:10.667 "data_offset": 0, 00:18:10.667 "data_size": 65536 00:18:10.667 } 00:18:10.667 ] 00:18:10.667 }' 00:18:10.667 11:27:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.667 11:27:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:10.667 11:27:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.667 11:27:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.667 11:27:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:11.604 11:27:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:11.604 11:27:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.604 11:27:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.604 11:27:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.604 11:27:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.604 11:27:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.604 11:27:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.604 11:27:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.604 11:27:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.604 11:27:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.604 11:27:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.604 11:27:54 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.604 "name": "raid_bdev1", 00:18:11.604 "uuid": "22290a4e-ec5d-4909-b7d3-9355ec863c18", 00:18:11.604 "strip_size_kb": 64, 00:18:11.604 "state": "online", 00:18:11.604 "raid_level": "raid5f", 00:18:11.604 "superblock": false, 00:18:11.604 "num_base_bdevs": 4, 00:18:11.604 "num_base_bdevs_discovered": 4, 00:18:11.604 "num_base_bdevs_operational": 4, 00:18:11.604 "process": { 00:18:11.604 "type": "rebuild", 00:18:11.604 "target": "spare", 00:18:11.604 "progress": { 00:18:11.604 "blocks": 86400, 00:18:11.604 "percent": 43 00:18:11.604 } 00:18:11.604 }, 00:18:11.604 "base_bdevs_list": [ 00:18:11.604 { 00:18:11.604 "name": "spare", 00:18:11.604 "uuid": "547211df-49e5-52a0-be33-7a83539eef6d", 00:18:11.604 "is_configured": true, 00:18:11.604 "data_offset": 0, 00:18:11.604 "data_size": 65536 00:18:11.604 }, 00:18:11.604 { 00:18:11.604 "name": "BaseBdev2", 00:18:11.604 "uuid": "26fc3072-a745-5a24-b662-a46897eb6208", 00:18:11.604 "is_configured": true, 00:18:11.604 "data_offset": 0, 00:18:11.604 "data_size": 65536 00:18:11.604 }, 00:18:11.604 { 00:18:11.604 "name": "BaseBdev3", 00:18:11.604 "uuid": "4b4d228c-bbda-5cd0-b7e9-fd478e8d4cde", 00:18:11.604 "is_configured": true, 00:18:11.604 "data_offset": 0, 00:18:11.604 "data_size": 65536 00:18:11.604 }, 00:18:11.604 { 00:18:11.604 "name": "BaseBdev4", 00:18:11.604 "uuid": "3c2e222a-86c1-5b80-a837-1c4deeb6225a", 00:18:11.604 "is_configured": true, 00:18:11.604 "data_offset": 0, 00:18:11.604 "data_size": 65536 00:18:11.604 } 00:18:11.604 ] 00:18:11.604 }' 00:18:11.604 11:27:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.604 11:27:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.604 11:27:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.604 11:27:54 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.604 11:27:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:12.982 11:27:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:12.982 11:27:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.982 11:27:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.982 11:27:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.982 11:27:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.982 11:27:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.982 11:27:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.982 11:27:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.982 11:27:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.982 11:27:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.982 11:27:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.982 11:27:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.982 "name": "raid_bdev1", 00:18:12.982 "uuid": "22290a4e-ec5d-4909-b7d3-9355ec863c18", 00:18:12.982 "strip_size_kb": 64, 00:18:12.982 "state": "online", 00:18:12.982 "raid_level": "raid5f", 00:18:12.982 "superblock": false, 00:18:12.982 "num_base_bdevs": 4, 00:18:12.983 "num_base_bdevs_discovered": 4, 00:18:12.983 "num_base_bdevs_operational": 4, 00:18:12.983 "process": { 00:18:12.983 "type": "rebuild", 00:18:12.983 "target": "spare", 00:18:12.983 "progress": { 00:18:12.983 "blocks": 109440, 00:18:12.983 "percent": 55 00:18:12.983 } 
00:18:12.983 }, 00:18:12.983 "base_bdevs_list": [ 00:18:12.983 { 00:18:12.983 "name": "spare", 00:18:12.983 "uuid": "547211df-49e5-52a0-be33-7a83539eef6d", 00:18:12.983 "is_configured": true, 00:18:12.983 "data_offset": 0, 00:18:12.983 "data_size": 65536 00:18:12.983 }, 00:18:12.983 { 00:18:12.983 "name": "BaseBdev2", 00:18:12.983 "uuid": "26fc3072-a745-5a24-b662-a46897eb6208", 00:18:12.983 "is_configured": true, 00:18:12.983 "data_offset": 0, 00:18:12.983 "data_size": 65536 00:18:12.983 }, 00:18:12.983 { 00:18:12.983 "name": "BaseBdev3", 00:18:12.983 "uuid": "4b4d228c-bbda-5cd0-b7e9-fd478e8d4cde", 00:18:12.983 "is_configured": true, 00:18:12.983 "data_offset": 0, 00:18:12.983 "data_size": 65536 00:18:12.983 }, 00:18:12.983 { 00:18:12.983 "name": "BaseBdev4", 00:18:12.983 "uuid": "3c2e222a-86c1-5b80-a837-1c4deeb6225a", 00:18:12.983 "is_configured": true, 00:18:12.983 "data_offset": 0, 00:18:12.983 "data_size": 65536 00:18:12.983 } 00:18:12.983 ] 00:18:12.983 }' 00:18:12.983 11:27:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.983 11:27:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.983 11:27:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.983 11:27:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.983 11:27:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:13.923 11:27:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:13.923 11:27:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.923 11:27:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.923 11:27:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.923 
11:27:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.923 11:27:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.923 11:27:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.923 11:27:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.923 11:27:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.923 11:27:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.923 11:27:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.923 11:27:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.923 "name": "raid_bdev1", 00:18:13.923 "uuid": "22290a4e-ec5d-4909-b7d3-9355ec863c18", 00:18:13.923 "strip_size_kb": 64, 00:18:13.923 "state": "online", 00:18:13.923 "raid_level": "raid5f", 00:18:13.923 "superblock": false, 00:18:13.923 "num_base_bdevs": 4, 00:18:13.923 "num_base_bdevs_discovered": 4, 00:18:13.923 "num_base_bdevs_operational": 4, 00:18:13.923 "process": { 00:18:13.923 "type": "rebuild", 00:18:13.923 "target": "spare", 00:18:13.923 "progress": { 00:18:13.923 "blocks": 130560, 00:18:13.923 "percent": 66 00:18:13.923 } 00:18:13.923 }, 00:18:13.923 "base_bdevs_list": [ 00:18:13.923 { 00:18:13.923 "name": "spare", 00:18:13.923 "uuid": "547211df-49e5-52a0-be33-7a83539eef6d", 00:18:13.923 "is_configured": true, 00:18:13.923 "data_offset": 0, 00:18:13.923 "data_size": 65536 00:18:13.923 }, 00:18:13.923 { 00:18:13.923 "name": "BaseBdev2", 00:18:13.923 "uuid": "26fc3072-a745-5a24-b662-a46897eb6208", 00:18:13.923 "is_configured": true, 00:18:13.923 "data_offset": 0, 00:18:13.923 "data_size": 65536 00:18:13.923 }, 00:18:13.923 { 00:18:13.923 "name": "BaseBdev3", 00:18:13.923 "uuid": "4b4d228c-bbda-5cd0-b7e9-fd478e8d4cde", 
00:18:13.923 "is_configured": true, 00:18:13.923 "data_offset": 0, 00:18:13.923 "data_size": 65536 00:18:13.923 }, 00:18:13.923 { 00:18:13.923 "name": "BaseBdev4", 00:18:13.924 "uuid": "3c2e222a-86c1-5b80-a837-1c4deeb6225a", 00:18:13.924 "is_configured": true, 00:18:13.924 "data_offset": 0, 00:18:13.924 "data_size": 65536 00:18:13.924 } 00:18:13.924 ] 00:18:13.924 }' 00:18:13.924 11:27:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.924 11:27:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.924 11:27:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.924 11:27:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.924 11:27:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:15.305 11:27:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:15.305 11:27:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.305 11:27:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.305 11:27:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.305 11:27:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.305 11:27:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.305 11:27:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.305 11:27:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.305 11:27:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.305 11:27:57 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:15.305 11:27:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.305 11:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.305 "name": "raid_bdev1", 00:18:15.305 "uuid": "22290a4e-ec5d-4909-b7d3-9355ec863c18", 00:18:15.305 "strip_size_kb": 64, 00:18:15.305 "state": "online", 00:18:15.305 "raid_level": "raid5f", 00:18:15.305 "superblock": false, 00:18:15.305 "num_base_bdevs": 4, 00:18:15.305 "num_base_bdevs_discovered": 4, 00:18:15.305 "num_base_bdevs_operational": 4, 00:18:15.305 "process": { 00:18:15.305 "type": "rebuild", 00:18:15.306 "target": "spare", 00:18:15.306 "progress": { 00:18:15.306 "blocks": 153600, 00:18:15.306 "percent": 78 00:18:15.306 } 00:18:15.306 }, 00:18:15.306 "base_bdevs_list": [ 00:18:15.306 { 00:18:15.306 "name": "spare", 00:18:15.306 "uuid": "547211df-49e5-52a0-be33-7a83539eef6d", 00:18:15.306 "is_configured": true, 00:18:15.306 "data_offset": 0, 00:18:15.306 "data_size": 65536 00:18:15.306 }, 00:18:15.306 { 00:18:15.306 "name": "BaseBdev2", 00:18:15.306 "uuid": "26fc3072-a745-5a24-b662-a46897eb6208", 00:18:15.306 "is_configured": true, 00:18:15.306 "data_offset": 0, 00:18:15.306 "data_size": 65536 00:18:15.306 }, 00:18:15.306 { 00:18:15.306 "name": "BaseBdev3", 00:18:15.306 "uuid": "4b4d228c-bbda-5cd0-b7e9-fd478e8d4cde", 00:18:15.306 "is_configured": true, 00:18:15.306 "data_offset": 0, 00:18:15.306 "data_size": 65536 00:18:15.306 }, 00:18:15.306 { 00:18:15.306 "name": "BaseBdev4", 00:18:15.306 "uuid": "3c2e222a-86c1-5b80-a837-1c4deeb6225a", 00:18:15.306 "is_configured": true, 00:18:15.306 "data_offset": 0, 00:18:15.306 "data_size": 65536 00:18:15.306 } 00:18:15.306 ] 00:18:15.306 }' 00:18:15.306 11:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.306 11:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:18:15.306 11:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.306 11:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:15.306 11:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:16.245 11:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:16.245 11:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.245 11:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.245 11:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.245 11:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.245 11:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.245 11:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.245 11:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.245 11:27:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.245 11:27:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.245 11:27:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.245 11:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.245 "name": "raid_bdev1", 00:18:16.245 "uuid": "22290a4e-ec5d-4909-b7d3-9355ec863c18", 00:18:16.245 "strip_size_kb": 64, 00:18:16.245 "state": "online", 00:18:16.245 "raid_level": "raid5f", 00:18:16.245 "superblock": false, 00:18:16.245 "num_base_bdevs": 4, 00:18:16.245 "num_base_bdevs_discovered": 4, 00:18:16.245 "num_base_bdevs_operational": 4, 00:18:16.245 
"process": { 00:18:16.245 "type": "rebuild", 00:18:16.245 "target": "spare", 00:18:16.245 "progress": { 00:18:16.245 "blocks": 174720, 00:18:16.245 "percent": 88 00:18:16.245 } 00:18:16.245 }, 00:18:16.245 "base_bdevs_list": [ 00:18:16.245 { 00:18:16.245 "name": "spare", 00:18:16.245 "uuid": "547211df-49e5-52a0-be33-7a83539eef6d", 00:18:16.245 "is_configured": true, 00:18:16.245 "data_offset": 0, 00:18:16.245 "data_size": 65536 00:18:16.245 }, 00:18:16.245 { 00:18:16.245 "name": "BaseBdev2", 00:18:16.245 "uuid": "26fc3072-a745-5a24-b662-a46897eb6208", 00:18:16.245 "is_configured": true, 00:18:16.245 "data_offset": 0, 00:18:16.245 "data_size": 65536 00:18:16.245 }, 00:18:16.245 { 00:18:16.245 "name": "BaseBdev3", 00:18:16.245 "uuid": "4b4d228c-bbda-5cd0-b7e9-fd478e8d4cde", 00:18:16.245 "is_configured": true, 00:18:16.245 "data_offset": 0, 00:18:16.245 "data_size": 65536 00:18:16.245 }, 00:18:16.245 { 00:18:16.245 "name": "BaseBdev4", 00:18:16.245 "uuid": "3c2e222a-86c1-5b80-a837-1c4deeb6225a", 00:18:16.245 "is_configured": true, 00:18:16.245 "data_offset": 0, 00:18:16.245 "data_size": 65536 00:18:16.245 } 00:18:16.245 ] 00:18:16.245 }' 00:18:16.245 11:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.245 11:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:16.245 11:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.245 11:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.245 11:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:17.182 11:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:17.182 11:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.182 11:28:00 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.182 11:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:17.182 11:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:17.182 11:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.182 11:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.182 11:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.182 11:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.182 11:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.441 11:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.441 [2024-11-20 11:28:00.313296] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:17.441 [2024-11-20 11:28:00.313413] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:17.441 [2024-11-20 11:28:00.313508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.441 11:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.441 "name": "raid_bdev1", 00:18:17.441 "uuid": "22290a4e-ec5d-4909-b7d3-9355ec863c18", 00:18:17.441 "strip_size_kb": 64, 00:18:17.441 "state": "online", 00:18:17.441 "raid_level": "raid5f", 00:18:17.441 "superblock": false, 00:18:17.441 "num_base_bdevs": 4, 00:18:17.441 "num_base_bdevs_discovered": 4, 00:18:17.441 "num_base_bdevs_operational": 4, 00:18:17.441 "process": { 00:18:17.441 "type": "rebuild", 00:18:17.441 "target": "spare", 00:18:17.441 "progress": { 00:18:17.441 "blocks": 195840, 00:18:17.441 "percent": 99 00:18:17.441 } 00:18:17.441 }, 00:18:17.441 "base_bdevs_list": [ 
00:18:17.441 { 00:18:17.441 "name": "spare", 00:18:17.441 "uuid": "547211df-49e5-52a0-be33-7a83539eef6d", 00:18:17.441 "is_configured": true, 00:18:17.441 "data_offset": 0, 00:18:17.441 "data_size": 65536 00:18:17.441 }, 00:18:17.441 { 00:18:17.441 "name": "BaseBdev2", 00:18:17.441 "uuid": "26fc3072-a745-5a24-b662-a46897eb6208", 00:18:17.441 "is_configured": true, 00:18:17.441 "data_offset": 0, 00:18:17.441 "data_size": 65536 00:18:17.441 }, 00:18:17.441 { 00:18:17.441 "name": "BaseBdev3", 00:18:17.441 "uuid": "4b4d228c-bbda-5cd0-b7e9-fd478e8d4cde", 00:18:17.441 "is_configured": true, 00:18:17.441 "data_offset": 0, 00:18:17.441 "data_size": 65536 00:18:17.441 }, 00:18:17.441 { 00:18:17.441 "name": "BaseBdev4", 00:18:17.441 "uuid": "3c2e222a-86c1-5b80-a837-1c4deeb6225a", 00:18:17.441 "is_configured": true, 00:18:17.441 "data_offset": 0, 00:18:17.441 "data_size": 65536 00:18:17.441 } 00:18:17.441 ] 00:18:17.441 }' 00:18:17.441 11:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.441 11:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:17.441 11:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.441 11:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.441 11:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:18.452 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:18.452 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.452 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.452 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.452 11:28:01 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.452 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.452 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.452 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.452 11:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.452 11:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.452 11:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.452 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.452 "name": "raid_bdev1", 00:18:18.452 "uuid": "22290a4e-ec5d-4909-b7d3-9355ec863c18", 00:18:18.452 "strip_size_kb": 64, 00:18:18.452 "state": "online", 00:18:18.452 "raid_level": "raid5f", 00:18:18.452 "superblock": false, 00:18:18.452 "num_base_bdevs": 4, 00:18:18.453 "num_base_bdevs_discovered": 4, 00:18:18.453 "num_base_bdevs_operational": 4, 00:18:18.453 "base_bdevs_list": [ 00:18:18.453 { 00:18:18.453 "name": "spare", 00:18:18.453 "uuid": "547211df-49e5-52a0-be33-7a83539eef6d", 00:18:18.453 "is_configured": true, 00:18:18.453 "data_offset": 0, 00:18:18.453 "data_size": 65536 00:18:18.453 }, 00:18:18.453 { 00:18:18.453 "name": "BaseBdev2", 00:18:18.453 "uuid": "26fc3072-a745-5a24-b662-a46897eb6208", 00:18:18.453 "is_configured": true, 00:18:18.453 "data_offset": 0, 00:18:18.453 "data_size": 65536 00:18:18.453 }, 00:18:18.453 { 00:18:18.453 "name": "BaseBdev3", 00:18:18.453 "uuid": "4b4d228c-bbda-5cd0-b7e9-fd478e8d4cde", 00:18:18.453 "is_configured": true, 00:18:18.453 "data_offset": 0, 00:18:18.453 "data_size": 65536 00:18:18.453 }, 00:18:18.453 { 00:18:18.453 "name": "BaseBdev4", 00:18:18.453 "uuid": "3c2e222a-86c1-5b80-a837-1c4deeb6225a", 00:18:18.453 "is_configured": 
true, 00:18:18.453 "data_offset": 0, 00:18:18.453 "data_size": 65536 00:18:18.453 } 00:18:18.453 ] 00:18:18.453 }' 00:18:18.453 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.453 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:18.453 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.453 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:18.453 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:18.453 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:18.453 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.453 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:18.453 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:18.453 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.453 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.453 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.453 11:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.453 11:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.710 11:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.710 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.710 "name": "raid_bdev1", 00:18:18.710 "uuid": "22290a4e-ec5d-4909-b7d3-9355ec863c18", 00:18:18.710 "strip_size_kb": 64, 00:18:18.710 "state": 
"online", 00:18:18.710 "raid_level": "raid5f", 00:18:18.710 "superblock": false, 00:18:18.710 "num_base_bdevs": 4, 00:18:18.710 "num_base_bdevs_discovered": 4, 00:18:18.710 "num_base_bdevs_operational": 4, 00:18:18.710 "base_bdevs_list": [ 00:18:18.710 { 00:18:18.710 "name": "spare", 00:18:18.710 "uuid": "547211df-49e5-52a0-be33-7a83539eef6d", 00:18:18.710 "is_configured": true, 00:18:18.710 "data_offset": 0, 00:18:18.711 "data_size": 65536 00:18:18.711 }, 00:18:18.711 { 00:18:18.711 "name": "BaseBdev2", 00:18:18.711 "uuid": "26fc3072-a745-5a24-b662-a46897eb6208", 00:18:18.711 "is_configured": true, 00:18:18.711 "data_offset": 0, 00:18:18.711 "data_size": 65536 00:18:18.711 }, 00:18:18.711 { 00:18:18.711 "name": "BaseBdev3", 00:18:18.711 "uuid": "4b4d228c-bbda-5cd0-b7e9-fd478e8d4cde", 00:18:18.711 "is_configured": true, 00:18:18.711 "data_offset": 0, 00:18:18.711 "data_size": 65536 00:18:18.711 }, 00:18:18.711 { 00:18:18.711 "name": "BaseBdev4", 00:18:18.711 "uuid": "3c2e222a-86c1-5b80-a837-1c4deeb6225a", 00:18:18.711 "is_configured": true, 00:18:18.711 "data_offset": 0, 00:18:18.711 "data_size": 65536 00:18:18.711 } 00:18:18.711 ] 00:18:18.711 }' 00:18:18.711 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.711 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:18.711 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.711 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:18.711 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:18.711 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.711 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.711 11:28:01 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:18.711 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:18.711 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:18.711 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.711 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.711 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.711 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.711 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.711 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.711 11:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.711 11:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.711 11:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.711 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.711 "name": "raid_bdev1", 00:18:18.711 "uuid": "22290a4e-ec5d-4909-b7d3-9355ec863c18", 00:18:18.711 "strip_size_kb": 64, 00:18:18.711 "state": "online", 00:18:18.711 "raid_level": "raid5f", 00:18:18.711 "superblock": false, 00:18:18.711 "num_base_bdevs": 4, 00:18:18.711 "num_base_bdevs_discovered": 4, 00:18:18.711 "num_base_bdevs_operational": 4, 00:18:18.711 "base_bdevs_list": [ 00:18:18.711 { 00:18:18.711 "name": "spare", 00:18:18.711 "uuid": "547211df-49e5-52a0-be33-7a83539eef6d", 00:18:18.711 "is_configured": true, 00:18:18.711 "data_offset": 0, 00:18:18.711 "data_size": 65536 00:18:18.711 }, 00:18:18.711 { 00:18:18.711 
"name": "BaseBdev2", 00:18:18.711 "uuid": "26fc3072-a745-5a24-b662-a46897eb6208", 00:18:18.711 "is_configured": true, 00:18:18.711 "data_offset": 0, 00:18:18.711 "data_size": 65536 00:18:18.711 }, 00:18:18.711 { 00:18:18.711 "name": "BaseBdev3", 00:18:18.711 "uuid": "4b4d228c-bbda-5cd0-b7e9-fd478e8d4cde", 00:18:18.711 "is_configured": true, 00:18:18.711 "data_offset": 0, 00:18:18.711 "data_size": 65536 00:18:18.711 }, 00:18:18.711 { 00:18:18.711 "name": "BaseBdev4", 00:18:18.711 "uuid": "3c2e222a-86c1-5b80-a837-1c4deeb6225a", 00:18:18.711 "is_configured": true, 00:18:18.711 "data_offset": 0, 00:18:18.711 "data_size": 65536 00:18:18.711 } 00:18:18.711 ] 00:18:18.711 }' 00:18:18.711 11:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.711 11:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.277 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:19.277 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.277 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.277 [2024-11-20 11:28:02.140746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:19.277 [2024-11-20 11:28:02.140851] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:19.277 [2024-11-20 11:28:02.140982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:19.277 [2024-11-20 11:28:02.141142] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:19.277 [2024-11-20 11:28:02.141214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:19.277 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.277 11:28:02 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.277 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.277 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:19.277 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.277 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.277 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:19.277 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:19.277 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:19.277 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:19.277 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:19.277 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:19.277 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:19.277 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:19.277 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:19.277 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:19.277 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:19.277 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:19.277 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:19.535 /dev/nbd0 00:18:19.535 11:28:02 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:19.535 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:19.535 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:19.535 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:19.535 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:19.535 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:19.535 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:19.535 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:19.535 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:19.535 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:19.535 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:19.535 1+0 records in 00:18:19.535 1+0 records out 00:18:19.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390736 s, 10.5 MB/s 00:18:19.535 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.535 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:19.535 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.535 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:19.535 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:19.535 11:28:02 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:19.535 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:19.535 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:19.793 /dev/nbd1 00:18:19.793 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:19.793 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:19.793 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:19.793 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:19.793 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:19.793 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:19.793 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:19.793 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:19.793 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:19.793 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:19.793 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:19.793 1+0 records in 00:18:19.793 1+0 records out 00:18:19.793 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584941 s, 7.0 MB/s 00:18:19.793 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.793 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:19.793 11:28:02 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.793 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:19.793 11:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:19.793 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:19.793 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:19.793 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:20.049 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:20.049 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:20.049 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:20.050 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:20.050 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:20.050 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.050 11:28:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:20.307 11:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:20.307 11:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:20.307 11:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:20.307 11:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.307 11:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.307 11:28:03 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:20.307 11:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:20.307 11:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.307 11:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.307 11:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:20.565 11:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:20.565 11:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:20.565 11:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:20.565 11:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.565 11:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.565 11:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:20.565 11:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:20.565 11:28:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.565 11:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:20.565 11:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84817 00:18:20.565 11:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84817 ']' 00:18:20.565 11:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84817 00:18:20.565 11:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:18:20.565 11:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.565 11:28:03 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84817 00:18:20.565 killing process with pid 84817 00:18:20.565 Received shutdown signal, test time was about 60.000000 seconds 00:18:20.565 00:18:20.565 Latency(us) 00:18:20.565 [2024-11-20T11:28:03.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.565 [2024-11-20T11:28:03.681Z] =================================================================================================================== 00:18:20.565 [2024-11-20T11:28:03.681Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:20.565 11:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:20.565 11:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:20.565 11:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84817' 00:18:20.565 11:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84817 00:18:20.565 [2024-11-20 11:28:03.549124] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:20.565 11:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84817 00:18:21.130 [2024-11-20 11:28:04.079835] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:22.511 11:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:22.511 00:18:22.511 real 0m20.597s 00:18:22.511 user 0m24.741s 00:18:22.511 sys 0m2.288s 00:18:22.511 11:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.511 ************************************ 00:18:22.511 END TEST raid5f_rebuild_test 00:18:22.511 ************************************ 00:18:22.511 11:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.511 11:28:05 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb 
raid_rebuild_test raid5f 4 true false true 00:18:22.511 11:28:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:22.512 11:28:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.512 11:28:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.512 ************************************ 00:18:22.512 START TEST raid5f_rebuild_test_sb 00:18:22.512 ************************************ 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:22.512 11:28:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85340 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85340 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85340 ']' 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.512 11:28:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.512 [2024-11-20 11:28:05.425717] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:18:22.512 [2024-11-20 11:28:05.425932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:18:22.512 Zero copy mechanism will not be used. 
00:18:22.512 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85340 ] 00:18:22.512 [2024-11-20 11:28:05.603071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.778 [2024-11-20 11:28:05.718808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.038 [2024-11-20 11:28:05.916403] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.038 [2024-11-20 11:28:05.916569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.298 BaseBdev1_malloc 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.298 [2024-11-20 11:28:06.332060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:23.298 [2024-11-20 11:28:06.332198] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:18:23.298 [2024-11-20 11:28:06.332251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:23.298 [2024-11-20 11:28:06.332300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.298 [2024-11-20 11:28:06.334598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.298 [2024-11-20 11:28:06.334671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:23.298 BaseBdev1 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.298 BaseBdev2_malloc 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.298 [2024-11-20 11:28:06.387600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:23.298 [2024-11-20 11:28:06.387724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.298 [2024-11-20 11:28:06.387766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:23.298 
[2024-11-20 11:28:06.387822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.298 [2024-11-20 11:28:06.390212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.298 [2024-11-20 11:28:06.390298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:23.298 BaseBdev2 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.298 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.559 BaseBdev3_malloc 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.559 [2024-11-20 11:28:06.458825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:23.559 [2024-11-20 11:28:06.458955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.559 [2024-11-20 11:28:06.458997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:23.559 [2024-11-20 11:28:06.459031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.559 [2024-11-20 11:28:06.461135] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.559 [2024-11-20 11:28:06.461217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:23.559 BaseBdev3 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.559 BaseBdev4_malloc 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.559 [2024-11-20 11:28:06.513001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:23.559 [2024-11-20 11:28:06.513105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.559 [2024-11-20 11:28:06.513142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:23.559 [2024-11-20 11:28:06.513170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.559 [2024-11-20 11:28:06.515216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.559 [2024-11-20 11:28:06.515293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev4 00:18:23.559 BaseBdev4 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.559 spare_malloc 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.559 spare_delay 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.559 [2024-11-20 11:28:06.578706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:23.559 [2024-11-20 11:28:06.578819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.559 [2024-11-20 11:28:06.578860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:23.559 [2024-11-20 11:28:06.578892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.559 [2024-11-20 11:28:06.581072] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.559 [2024-11-20 11:28:06.581150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:23.559 spare 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.559 [2024-11-20 11:28:06.590738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:23.559 [2024-11-20 11:28:06.592655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:23.559 [2024-11-20 11:28:06.592793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:23.559 [2024-11-20 11:28:06.592877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:23.559 [2024-11-20 11:28:06.593128] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:23.559 [2024-11-20 11:28:06.593190] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:23.559 [2024-11-20 11:28:06.593492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:23.559 [2024-11-20 11:28:06.601644] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:23.559 [2024-11-20 11:28:06.601711] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:23.559 [2024-11-20 11:28:06.602005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.559 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.559 "name": "raid_bdev1", 00:18:23.559 "uuid": 
"2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:23.559 "strip_size_kb": 64, 00:18:23.559 "state": "online", 00:18:23.559 "raid_level": "raid5f", 00:18:23.559 "superblock": true, 00:18:23.559 "num_base_bdevs": 4, 00:18:23.559 "num_base_bdevs_discovered": 4, 00:18:23.559 "num_base_bdevs_operational": 4, 00:18:23.559 "base_bdevs_list": [ 00:18:23.559 { 00:18:23.559 "name": "BaseBdev1", 00:18:23.559 "uuid": "cc8b8dc6-ae9f-5811-9e5b-5c0ee8248de9", 00:18:23.559 "is_configured": true, 00:18:23.559 "data_offset": 2048, 00:18:23.559 "data_size": 63488 00:18:23.559 }, 00:18:23.559 { 00:18:23.559 "name": "BaseBdev2", 00:18:23.559 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:23.559 "is_configured": true, 00:18:23.559 "data_offset": 2048, 00:18:23.559 "data_size": 63488 00:18:23.559 }, 00:18:23.559 { 00:18:23.559 "name": "BaseBdev3", 00:18:23.559 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:23.559 "is_configured": true, 00:18:23.559 "data_offset": 2048, 00:18:23.559 "data_size": 63488 00:18:23.559 }, 00:18:23.559 { 00:18:23.559 "name": "BaseBdev4", 00:18:23.559 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:23.559 "is_configured": true, 00:18:23.559 "data_offset": 2048, 00:18:23.559 "data_size": 63488 00:18:23.559 } 00:18:23.559 ] 00:18:23.559 }' 00:18:23.560 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.560 11:28:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.139 [2024-11-20 11:28:07.034814] bdev_raid.c:1133:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:24.139 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:24.398 [2024-11-20 11:28:07.302244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:24.398 /dev/nbd0 00:18:24.398 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:24.398 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:24.398 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:24.398 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:24.398 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:24.398 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:24.398 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:24.398 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:24.398 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:24.398 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:24.398 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:24.398 1+0 records in 00:18:24.398 1+0 records out 00:18:24.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566136 s, 7.2 MB/s 00:18:24.398 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.398 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:24.398 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.399 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:24.399 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:24.399 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:24.399 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:24.399 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:24.399 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:24.399 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:24.399 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:18:24.967 496+0 records in 00:18:24.967 496+0 records out 00:18:24.967 97517568 bytes (98 MB, 93 MiB) copied, 0.452687 s, 215 MB/s 00:18:24.967 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:24.967 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:24.967 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:24.967 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:24.967 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:24.967 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:18:24.967 11:28:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:25.227 [2024-11-20 11:28:08.102923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.227 [2024-11-20 11:28:08.117808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.227 "name": "raid_bdev1", 00:18:25.227 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:25.227 "strip_size_kb": 64, 00:18:25.227 "state": "online", 00:18:25.227 "raid_level": "raid5f", 00:18:25.227 "superblock": true, 00:18:25.227 "num_base_bdevs": 4, 00:18:25.227 "num_base_bdevs_discovered": 3, 00:18:25.227 "num_base_bdevs_operational": 3, 00:18:25.227 "base_bdevs_list": [ 00:18:25.227 { 00:18:25.227 "name": null, 00:18:25.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.227 "is_configured": 
false, 00:18:25.227 "data_offset": 0, 00:18:25.227 "data_size": 63488 00:18:25.227 }, 00:18:25.227 { 00:18:25.227 "name": "BaseBdev2", 00:18:25.227 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:25.227 "is_configured": true, 00:18:25.227 "data_offset": 2048, 00:18:25.227 "data_size": 63488 00:18:25.227 }, 00:18:25.227 { 00:18:25.227 "name": "BaseBdev3", 00:18:25.227 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:25.227 "is_configured": true, 00:18:25.227 "data_offset": 2048, 00:18:25.227 "data_size": 63488 00:18:25.227 }, 00:18:25.227 { 00:18:25.227 "name": "BaseBdev4", 00:18:25.227 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:25.227 "is_configured": true, 00:18:25.227 "data_offset": 2048, 00:18:25.227 "data_size": 63488 00:18:25.227 } 00:18:25.227 ] 00:18:25.227 }' 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.227 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.486 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:25.486 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.486 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.486 [2024-11-20 11:28:08.573053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:25.486 [2024-11-20 11:28:08.589764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:18:25.486 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.486 11:28:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:25.486 [2024-11-20 11:28:08.599230] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.922 "name": "raid_bdev1", 00:18:26.922 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:26.922 "strip_size_kb": 64, 00:18:26.922 "state": "online", 00:18:26.922 "raid_level": "raid5f", 00:18:26.922 "superblock": true, 00:18:26.922 "num_base_bdevs": 4, 00:18:26.922 "num_base_bdevs_discovered": 4, 00:18:26.922 "num_base_bdevs_operational": 4, 00:18:26.922 "process": { 00:18:26.922 "type": "rebuild", 00:18:26.922 "target": "spare", 00:18:26.922 "progress": { 00:18:26.922 "blocks": 17280, 00:18:26.922 "percent": 9 00:18:26.922 } 00:18:26.922 }, 00:18:26.922 "base_bdevs_list": [ 00:18:26.922 { 00:18:26.922 "name": "spare", 00:18:26.922 "uuid": "a6535f5b-24eb-534e-a7db-960cd0b414a2", 00:18:26.922 "is_configured": true, 00:18:26.922 "data_offset": 2048, 00:18:26.922 "data_size": 63488 00:18:26.922 }, 
00:18:26.922 { 00:18:26.922 "name": "BaseBdev2", 00:18:26.922 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:26.922 "is_configured": true, 00:18:26.922 "data_offset": 2048, 00:18:26.922 "data_size": 63488 00:18:26.922 }, 00:18:26.922 { 00:18:26.922 "name": "BaseBdev3", 00:18:26.922 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:26.922 "is_configured": true, 00:18:26.922 "data_offset": 2048, 00:18:26.922 "data_size": 63488 00:18:26.922 }, 00:18:26.922 { 00:18:26.922 "name": "BaseBdev4", 00:18:26.922 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:26.922 "is_configured": true, 00:18:26.922 "data_offset": 2048, 00:18:26.922 "data_size": 63488 00:18:26.922 } 00:18:26.922 ] 00:18:26.922 }' 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.922 [2024-11-20 11:28:09.754070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:26.922 [2024-11-20 11:28:09.808048] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:26.922 [2024-11-20 11:28:09.808135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.922 [2024-11-20 11:28:09.808153] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:26.922 
[2024-11-20 11:28:09.808163] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.922 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.922 "name": "raid_bdev1", 00:18:26.922 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:26.922 "strip_size_kb": 64, 00:18:26.922 "state": "online", 00:18:26.922 "raid_level": "raid5f", 00:18:26.922 "superblock": true, 00:18:26.922 "num_base_bdevs": 4, 00:18:26.922 "num_base_bdevs_discovered": 3, 00:18:26.922 "num_base_bdevs_operational": 3, 00:18:26.922 "base_bdevs_list": [ 00:18:26.922 { 00:18:26.922 "name": null, 00:18:26.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.922 "is_configured": false, 00:18:26.922 "data_offset": 0, 00:18:26.922 "data_size": 63488 00:18:26.922 }, 00:18:26.922 { 00:18:26.922 "name": "BaseBdev2", 00:18:26.922 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:26.922 "is_configured": true, 00:18:26.922 "data_offset": 2048, 00:18:26.922 "data_size": 63488 00:18:26.922 }, 00:18:26.922 { 00:18:26.922 "name": "BaseBdev3", 00:18:26.923 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:26.923 "is_configured": true, 00:18:26.923 "data_offset": 2048, 00:18:26.923 "data_size": 63488 00:18:26.923 }, 00:18:26.923 { 00:18:26.923 "name": "BaseBdev4", 00:18:26.923 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:26.923 "is_configured": true, 00:18:26.923 "data_offset": 2048, 00:18:26.923 "data_size": 63488 00:18:26.923 } 00:18:26.923 ] 00:18:26.923 }' 00:18:26.923 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.923 11:28:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.182 11:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:27.182 11:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.182 11:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:27.182 11:28:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:18:27.182 11:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.182 11:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.182 11:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.182 11:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.182 11:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.182 11:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.443 11:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.443 "name": "raid_bdev1", 00:18:27.443 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:27.443 "strip_size_kb": 64, 00:18:27.443 "state": "online", 00:18:27.443 "raid_level": "raid5f", 00:18:27.443 "superblock": true, 00:18:27.443 "num_base_bdevs": 4, 00:18:27.443 "num_base_bdevs_discovered": 3, 00:18:27.443 "num_base_bdevs_operational": 3, 00:18:27.443 "base_bdevs_list": [ 00:18:27.443 { 00:18:27.443 "name": null, 00:18:27.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.443 "is_configured": false, 00:18:27.443 "data_offset": 0, 00:18:27.443 "data_size": 63488 00:18:27.443 }, 00:18:27.443 { 00:18:27.443 "name": "BaseBdev2", 00:18:27.443 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:27.443 "is_configured": true, 00:18:27.443 "data_offset": 2048, 00:18:27.443 "data_size": 63488 00:18:27.443 }, 00:18:27.443 { 00:18:27.443 "name": "BaseBdev3", 00:18:27.443 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:27.443 "is_configured": true, 00:18:27.443 "data_offset": 2048, 00:18:27.443 "data_size": 63488 00:18:27.443 }, 00:18:27.443 { 00:18:27.443 "name": "BaseBdev4", 00:18:27.443 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 
00:18:27.443 "is_configured": true, 00:18:27.443 "data_offset": 2048, 00:18:27.443 "data_size": 63488 00:18:27.443 } 00:18:27.443 ] 00:18:27.443 }' 00:18:27.443 11:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.443 11:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:27.443 11:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.443 11:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:27.443 11:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:27.443 11:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.443 11:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.443 [2024-11-20 11:28:10.425635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:27.443 [2024-11-20 11:28:10.442398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:18:27.443 11:28:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.443 11:28:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:27.443 [2024-11-20 11:28:10.452569] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:28.381 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.381 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.381 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.381 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:18:28.381 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.381 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.381 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.381 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.381 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.381 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.641 "name": "raid_bdev1", 00:18:28.641 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:28.641 "strip_size_kb": 64, 00:18:28.641 "state": "online", 00:18:28.641 "raid_level": "raid5f", 00:18:28.641 "superblock": true, 00:18:28.641 "num_base_bdevs": 4, 00:18:28.641 "num_base_bdevs_discovered": 4, 00:18:28.641 "num_base_bdevs_operational": 4, 00:18:28.641 "process": { 00:18:28.641 "type": "rebuild", 00:18:28.641 "target": "spare", 00:18:28.641 "progress": { 00:18:28.641 "blocks": 19200, 00:18:28.641 "percent": 10 00:18:28.641 } 00:18:28.641 }, 00:18:28.641 "base_bdevs_list": [ 00:18:28.641 { 00:18:28.641 "name": "spare", 00:18:28.641 "uuid": "a6535f5b-24eb-534e-a7db-960cd0b414a2", 00:18:28.641 "is_configured": true, 00:18:28.641 "data_offset": 2048, 00:18:28.641 "data_size": 63488 00:18:28.641 }, 00:18:28.641 { 00:18:28.641 "name": "BaseBdev2", 00:18:28.641 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:28.641 "is_configured": true, 00:18:28.641 "data_offset": 2048, 00:18:28.641 "data_size": 63488 00:18:28.641 }, 00:18:28.641 { 00:18:28.641 "name": "BaseBdev3", 00:18:28.641 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:28.641 "is_configured": true, 00:18:28.641 "data_offset": 2048, 
00:18:28.641 "data_size": 63488 00:18:28.641 }, 00:18:28.641 { 00:18:28.641 "name": "BaseBdev4", 00:18:28.641 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:28.641 "is_configured": true, 00:18:28.641 "data_offset": 2048, 00:18:28.641 "data_size": 63488 00:18:28.641 } 00:18:28.641 ] 00:18:28.641 }' 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:28.641 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=657 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
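The `/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected` message captured above is the classic single-bracket failure: a variable expanded to nothing, so `[` saw `=` as its first operand. A minimal reproduction, assuming a hypothetical `flag` variable standing in for whatever was empty at that line (quoting the expansion is the usual fix):

```shell
# Hypothetical stand-in for the variable that was empty at bdev_raid.sh:666.
flag=""

# Unquoted, [ $flag = false ] would expand to [ = false ] and fail with
# "[: =: unary operator expected". Quoting keeps the empty word in place,
# so the test stays a valid binary comparison: [ "" = false ].
if [ "$flag" = false ]; then
    echo "rebuild-disabled"
else
    echo "rebuild-enabled"
fi
```

The test proceeds anyway because the failed `[` simply returns nonzero, which the script treats the same as the comparison being false.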
00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.641 "name": "raid_bdev1", 00:18:28.641 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:28.641 "strip_size_kb": 64, 00:18:28.641 "state": "online", 00:18:28.641 "raid_level": "raid5f", 00:18:28.641 "superblock": true, 00:18:28.641 "num_base_bdevs": 4, 00:18:28.641 "num_base_bdevs_discovered": 4, 00:18:28.641 "num_base_bdevs_operational": 4, 00:18:28.641 "process": { 00:18:28.641 "type": "rebuild", 00:18:28.641 "target": "spare", 00:18:28.641 "progress": { 00:18:28.641 "blocks": 21120, 00:18:28.641 "percent": 11 00:18:28.641 } 00:18:28.641 }, 00:18:28.641 "base_bdevs_list": [ 00:18:28.641 { 00:18:28.641 "name": "spare", 00:18:28.641 "uuid": "a6535f5b-24eb-534e-a7db-960cd0b414a2", 00:18:28.641 "is_configured": true, 00:18:28.641 "data_offset": 2048, 00:18:28.641 "data_size": 63488 00:18:28.641 }, 00:18:28.641 { 00:18:28.641 "name": "BaseBdev2", 00:18:28.641 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:28.641 "is_configured": true, 00:18:28.641 "data_offset": 2048, 00:18:28.641 "data_size": 63488 00:18:28.641 }, 00:18:28.641 { 00:18:28.641 "name": "BaseBdev3", 00:18:28.641 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:28.641 "is_configured": true, 00:18:28.641 "data_offset": 2048, 
00:18:28.641 "data_size": 63488 00:18:28.641 }, 00:18:28.641 { 00:18:28.641 "name": "BaseBdev4", 00:18:28.641 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:28.641 "is_configured": true, 00:18:28.641 "data_offset": 2048, 00:18:28.641 "data_size": 63488 00:18:28.641 } 00:18:28.641 ] 00:18:28.641 }' 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.641 11:28:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:30.019 11:28:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:30.019 11:28:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.020 11:28:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.020 11:28:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.020 11:28:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.020 11:28:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.020 11:28:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.020 11:28:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.020 11:28:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.020 11:28:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
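The `verify_raid_bdev_process` checks repeated throughout this log poll `rpc_cmd bdev_raid_get_bdevs all` and extract `.process.type` and `.process.target` with `jq -r '… // "none"'`, so a finished rebuild (no `process` object) reads back as `none`. A rough sketch of that extraction, using `grep`/`sed` instead of `jq` so it runs standalone; `raid_bdev_info` here is a trimmed sample, not the full RPC output shown above:

```shell
# Trimmed sample of the bdev_raid_get_bdevs JSON from the log (assumption:
# only the fields the process check reads are kept).
raid_bdev_info='{ "name": "raid_bdev1", "process": { "type": "rebuild", "target": "spare" } }'

# Stand-in for jq -r '.process.KEY // "none"': pull the quoted value after a
# key, or print nothing when the key is absent.
json_field() {
    echo "$2" | grep -o "\"$1\": \"[^\"]*\"" | head -n1 | sed 's/.*: "\(.*\)"/\1/'
}

type="$(json_field type "$raid_bdev_info")";   type="${type:-none}"
target="$(json_field target "$raid_bdev_info")"; target="${target:-none}"
echo "$type $target"   # "rebuild spare" while rebuilding, "none none" after
```

This mirrors why the log's `[[ rebuild == \r\e\b\u\i\l\d ]]` and `[[ spare == \s\p\a\r\e ]]` comparisons flip to `[[ none == \n\o\n\e ]]` once the process finishes.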
00:18:30.020 11:28:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.020 11:28:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.020 "name": "raid_bdev1", 00:18:30.020 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:30.020 "strip_size_kb": 64, 00:18:30.020 "state": "online", 00:18:30.020 "raid_level": "raid5f", 00:18:30.020 "superblock": true, 00:18:30.020 "num_base_bdevs": 4, 00:18:30.020 "num_base_bdevs_discovered": 4, 00:18:30.020 "num_base_bdevs_operational": 4, 00:18:30.020 "process": { 00:18:30.020 "type": "rebuild", 00:18:30.020 "target": "spare", 00:18:30.020 "progress": { 00:18:30.020 "blocks": 42240, 00:18:30.020 "percent": 22 00:18:30.020 } 00:18:30.020 }, 00:18:30.020 "base_bdevs_list": [ 00:18:30.020 { 00:18:30.020 "name": "spare", 00:18:30.020 "uuid": "a6535f5b-24eb-534e-a7db-960cd0b414a2", 00:18:30.020 "is_configured": true, 00:18:30.020 "data_offset": 2048, 00:18:30.020 "data_size": 63488 00:18:30.020 }, 00:18:30.020 { 00:18:30.020 "name": "BaseBdev2", 00:18:30.020 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:30.020 "is_configured": true, 00:18:30.020 "data_offset": 2048, 00:18:30.020 "data_size": 63488 00:18:30.020 }, 00:18:30.020 { 00:18:30.020 "name": "BaseBdev3", 00:18:30.020 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:30.020 "is_configured": true, 00:18:30.020 "data_offset": 2048, 00:18:30.020 "data_size": 63488 00:18:30.020 }, 00:18:30.020 { 00:18:30.020 "name": "BaseBdev4", 00:18:30.020 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:30.020 "is_configured": true, 00:18:30.020 "data_offset": 2048, 00:18:30.020 "data_size": 63488 00:18:30.020 } 00:18:30.020 ] 00:18:30.020 }' 00:18:30.020 11:28:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.020 11:28:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.020 11:28:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.020 11:28:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.020 11:28:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:30.956 11:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:30.956 11:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.956 11:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.956 11:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.956 11:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.956 11:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.956 11:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.956 11:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.956 11:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.956 11:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.956 11:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.956 11:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.956 "name": "raid_bdev1", 00:18:30.956 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:30.956 "strip_size_kb": 64, 00:18:30.956 "state": "online", 00:18:30.956 "raid_level": "raid5f", 00:18:30.956 "superblock": true, 00:18:30.956 "num_base_bdevs": 4, 00:18:30.956 "num_base_bdevs_discovered": 4, 00:18:30.956 "num_base_bdevs_operational": 
4, 00:18:30.956 "process": { 00:18:30.956 "type": "rebuild", 00:18:30.956 "target": "spare", 00:18:30.956 "progress": { 00:18:30.956 "blocks": 65280, 00:18:30.956 "percent": 34 00:18:30.956 } 00:18:30.956 }, 00:18:30.956 "base_bdevs_list": [ 00:18:30.956 { 00:18:30.956 "name": "spare", 00:18:30.956 "uuid": "a6535f5b-24eb-534e-a7db-960cd0b414a2", 00:18:30.956 "is_configured": true, 00:18:30.956 "data_offset": 2048, 00:18:30.956 "data_size": 63488 00:18:30.956 }, 00:18:30.956 { 00:18:30.956 "name": "BaseBdev2", 00:18:30.956 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:30.956 "is_configured": true, 00:18:30.956 "data_offset": 2048, 00:18:30.956 "data_size": 63488 00:18:30.956 }, 00:18:30.956 { 00:18:30.956 "name": "BaseBdev3", 00:18:30.956 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:30.956 "is_configured": true, 00:18:30.956 "data_offset": 2048, 00:18:30.956 "data_size": 63488 00:18:30.956 }, 00:18:30.956 { 00:18:30.956 "name": "BaseBdev4", 00:18:30.956 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:30.956 "is_configured": true, 00:18:30.956 "data_offset": 2048, 00:18:30.956 "data_size": 63488 00:18:30.956 } 00:18:30.956 ] 00:18:30.956 }' 00:18:30.956 11:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.956 11:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.956 11:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.956 11:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.956 11:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:32.336 11:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:32.336 11:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:32.336 
11:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.336 11:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:32.336 11:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:32.336 11:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.336 11:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.336 11:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.336 11:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.336 11:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.336 11:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.336 11:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.336 "name": "raid_bdev1", 00:18:32.336 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:32.336 "strip_size_kb": 64, 00:18:32.336 "state": "online", 00:18:32.336 "raid_level": "raid5f", 00:18:32.336 "superblock": true, 00:18:32.336 "num_base_bdevs": 4, 00:18:32.336 "num_base_bdevs_discovered": 4, 00:18:32.336 "num_base_bdevs_operational": 4, 00:18:32.336 "process": { 00:18:32.336 "type": "rebuild", 00:18:32.336 "target": "spare", 00:18:32.336 "progress": { 00:18:32.336 "blocks": 86400, 00:18:32.336 "percent": 45 00:18:32.336 } 00:18:32.336 }, 00:18:32.336 "base_bdevs_list": [ 00:18:32.336 { 00:18:32.336 "name": "spare", 00:18:32.336 "uuid": "a6535f5b-24eb-534e-a7db-960cd0b414a2", 00:18:32.336 "is_configured": true, 00:18:32.336 "data_offset": 2048, 00:18:32.336 "data_size": 63488 00:18:32.336 }, 00:18:32.336 { 00:18:32.336 "name": "BaseBdev2", 00:18:32.336 "uuid": 
"12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:32.336 "is_configured": true, 00:18:32.336 "data_offset": 2048, 00:18:32.336 "data_size": 63488 00:18:32.336 }, 00:18:32.336 { 00:18:32.336 "name": "BaseBdev3", 00:18:32.336 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:32.336 "is_configured": true, 00:18:32.336 "data_offset": 2048, 00:18:32.336 "data_size": 63488 00:18:32.336 }, 00:18:32.336 { 00:18:32.336 "name": "BaseBdev4", 00:18:32.336 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:32.336 "is_configured": true, 00:18:32.336 "data_offset": 2048, 00:18:32.336 "data_size": 63488 00:18:32.336 } 00:18:32.336 ] 00:18:32.336 }' 00:18:32.336 11:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.336 11:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:32.336 11:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.336 11:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:32.336 11:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:33.275 11:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:33.275 11:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:33.275 11:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.275 11:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:33.275 11:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:33.275 11:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.275 11:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:18:33.275 11:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.275 11:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.275 11:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.275 11:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.275 11:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.275 "name": "raid_bdev1", 00:18:33.275 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:33.275 "strip_size_kb": 64, 00:18:33.275 "state": "online", 00:18:33.275 "raid_level": "raid5f", 00:18:33.275 "superblock": true, 00:18:33.275 "num_base_bdevs": 4, 00:18:33.275 "num_base_bdevs_discovered": 4, 00:18:33.275 "num_base_bdevs_operational": 4, 00:18:33.275 "process": { 00:18:33.275 "type": "rebuild", 00:18:33.275 "target": "spare", 00:18:33.275 "progress": { 00:18:33.275 "blocks": 107520, 00:18:33.275 "percent": 56 00:18:33.275 } 00:18:33.275 }, 00:18:33.275 "base_bdevs_list": [ 00:18:33.275 { 00:18:33.275 "name": "spare", 00:18:33.275 "uuid": "a6535f5b-24eb-534e-a7db-960cd0b414a2", 00:18:33.275 "is_configured": true, 00:18:33.275 "data_offset": 2048, 00:18:33.275 "data_size": 63488 00:18:33.275 }, 00:18:33.275 { 00:18:33.275 "name": "BaseBdev2", 00:18:33.275 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:33.275 "is_configured": true, 00:18:33.275 "data_offset": 2048, 00:18:33.275 "data_size": 63488 00:18:33.275 }, 00:18:33.275 { 00:18:33.275 "name": "BaseBdev3", 00:18:33.275 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:33.275 "is_configured": true, 00:18:33.275 "data_offset": 2048, 00:18:33.275 "data_size": 63488 00:18:33.275 }, 00:18:33.275 { 00:18:33.275 "name": "BaseBdev4", 00:18:33.275 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:33.275 "is_configured": true, 00:18:33.275 "data_offset": 
2048, 00:18:33.275 "data_size": 63488 00:18:33.275 } 00:18:33.275 ] 00:18:33.275 }' 00:18:33.275 11:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.275 11:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:33.275 11:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.275 11:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.275 11:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:34.213 11:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:34.213 11:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:34.213 11:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.213 11:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:34.213 11:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:34.213 11:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.213 11:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.213 11:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.213 11:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.213 11:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.213 11:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.472 11:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.472 
"name": "raid_bdev1", 00:18:34.472 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:34.472 "strip_size_kb": 64, 00:18:34.472 "state": "online", 00:18:34.472 "raid_level": "raid5f", 00:18:34.472 "superblock": true, 00:18:34.472 "num_base_bdevs": 4, 00:18:34.472 "num_base_bdevs_discovered": 4, 00:18:34.472 "num_base_bdevs_operational": 4, 00:18:34.472 "process": { 00:18:34.472 "type": "rebuild", 00:18:34.472 "target": "spare", 00:18:34.472 "progress": { 00:18:34.472 "blocks": 128640, 00:18:34.472 "percent": 67 00:18:34.472 } 00:18:34.472 }, 00:18:34.472 "base_bdevs_list": [ 00:18:34.472 { 00:18:34.472 "name": "spare", 00:18:34.472 "uuid": "a6535f5b-24eb-534e-a7db-960cd0b414a2", 00:18:34.472 "is_configured": true, 00:18:34.472 "data_offset": 2048, 00:18:34.472 "data_size": 63488 00:18:34.472 }, 00:18:34.472 { 00:18:34.472 "name": "BaseBdev2", 00:18:34.472 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:34.472 "is_configured": true, 00:18:34.472 "data_offset": 2048, 00:18:34.472 "data_size": 63488 00:18:34.472 }, 00:18:34.472 { 00:18:34.472 "name": "BaseBdev3", 00:18:34.472 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:34.472 "is_configured": true, 00:18:34.472 "data_offset": 2048, 00:18:34.472 "data_size": 63488 00:18:34.472 }, 00:18:34.472 { 00:18:34.472 "name": "BaseBdev4", 00:18:34.472 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:34.472 "is_configured": true, 00:18:34.472 "data_offset": 2048, 00:18:34.472 "data_size": 63488 00:18:34.472 } 00:18:34.472 ] 00:18:34.472 }' 00:18:34.472 11:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.472 11:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:34.472 11:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.472 11:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:34.472 
11:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:35.409 11:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:35.409 11:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.409 11:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.409 11:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.409 11:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:35.409 11:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.409 11:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.409 11:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.409 11:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.409 11:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.409 11:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.409 11:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.409 "name": "raid_bdev1", 00:18:35.409 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:35.409 "strip_size_kb": 64, 00:18:35.409 "state": "online", 00:18:35.409 "raid_level": "raid5f", 00:18:35.409 "superblock": true, 00:18:35.409 "num_base_bdevs": 4, 00:18:35.409 "num_base_bdevs_discovered": 4, 00:18:35.409 "num_base_bdevs_operational": 4, 00:18:35.409 "process": { 00:18:35.409 "type": "rebuild", 00:18:35.409 "target": "spare", 00:18:35.409 "progress": { 00:18:35.409 "blocks": 151680, 00:18:35.409 "percent": 79 00:18:35.409 } 00:18:35.409 }, 
00:18:35.409 "base_bdevs_list": [ 00:18:35.409 { 00:18:35.409 "name": "spare", 00:18:35.409 "uuid": "a6535f5b-24eb-534e-a7db-960cd0b414a2", 00:18:35.409 "is_configured": true, 00:18:35.409 "data_offset": 2048, 00:18:35.409 "data_size": 63488 00:18:35.409 }, 00:18:35.409 { 00:18:35.409 "name": "BaseBdev2", 00:18:35.409 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:35.409 "is_configured": true, 00:18:35.409 "data_offset": 2048, 00:18:35.409 "data_size": 63488 00:18:35.409 }, 00:18:35.409 { 00:18:35.409 "name": "BaseBdev3", 00:18:35.409 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:35.409 "is_configured": true, 00:18:35.409 "data_offset": 2048, 00:18:35.409 "data_size": 63488 00:18:35.409 }, 00:18:35.409 { 00:18:35.409 "name": "BaseBdev4", 00:18:35.409 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:35.409 "is_configured": true, 00:18:35.409 "data_offset": 2048, 00:18:35.409 "data_size": 63488 00:18:35.409 } 00:18:35.409 ] 00:18:35.409 }' 00:18:35.409 11:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.409 11:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.409 11:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.669 11:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.669 11:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:36.611 11:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:36.611 11:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.611 11:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.611 11:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:18:36.611 11:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.611 11:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.611 11:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.611 11:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.611 11:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.611 11:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.611 11:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.611 11:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.611 "name": "raid_bdev1", 00:18:36.611 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:36.611 "strip_size_kb": 64, 00:18:36.611 "state": "online", 00:18:36.611 "raid_level": "raid5f", 00:18:36.611 "superblock": true, 00:18:36.611 "num_base_bdevs": 4, 00:18:36.611 "num_base_bdevs_discovered": 4, 00:18:36.611 "num_base_bdevs_operational": 4, 00:18:36.611 "process": { 00:18:36.611 "type": "rebuild", 00:18:36.611 "target": "spare", 00:18:36.611 "progress": { 00:18:36.611 "blocks": 172800, 00:18:36.611 "percent": 90 00:18:36.611 } 00:18:36.611 }, 00:18:36.611 "base_bdevs_list": [ 00:18:36.611 { 00:18:36.611 "name": "spare", 00:18:36.611 "uuid": "a6535f5b-24eb-534e-a7db-960cd0b414a2", 00:18:36.611 "is_configured": true, 00:18:36.611 "data_offset": 2048, 00:18:36.611 "data_size": 63488 00:18:36.611 }, 00:18:36.611 { 00:18:36.611 "name": "BaseBdev2", 00:18:36.611 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:36.611 "is_configured": true, 00:18:36.611 "data_offset": 2048, 00:18:36.611 "data_size": 63488 00:18:36.611 }, 00:18:36.611 { 00:18:36.611 "name": "BaseBdev3", 
00:18:36.611 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:36.612 "is_configured": true, 00:18:36.612 "data_offset": 2048, 00:18:36.612 "data_size": 63488 00:18:36.612 }, 00:18:36.612 { 00:18:36.612 "name": "BaseBdev4", 00:18:36.612 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:36.612 "is_configured": true, 00:18:36.612 "data_offset": 2048, 00:18:36.612 "data_size": 63488 00:18:36.612 } 00:18:36.612 ] 00:18:36.612 }' 00:18:36.612 11:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.612 11:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.612 11:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.612 11:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.612 11:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:37.552 [2024-11-20 11:28:20.521620] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:37.552 [2024-11-20 11:28:20.521807] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:37.552 [2024-11-20 11:28:20.522007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.812 11:28:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.812 "name": "raid_bdev1", 00:18:37.812 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:37.812 "strip_size_kb": 64, 00:18:37.812 "state": "online", 00:18:37.812 "raid_level": "raid5f", 00:18:37.812 "superblock": true, 00:18:37.812 "num_base_bdevs": 4, 00:18:37.812 "num_base_bdevs_discovered": 4, 00:18:37.812 "num_base_bdevs_operational": 4, 00:18:37.812 "base_bdevs_list": [ 00:18:37.812 { 00:18:37.812 "name": "spare", 00:18:37.812 "uuid": "a6535f5b-24eb-534e-a7db-960cd0b414a2", 00:18:37.812 "is_configured": true, 00:18:37.812 "data_offset": 2048, 00:18:37.812 "data_size": 63488 00:18:37.812 }, 00:18:37.812 { 00:18:37.812 "name": "BaseBdev2", 00:18:37.812 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:37.812 "is_configured": true, 00:18:37.812 "data_offset": 2048, 00:18:37.812 "data_size": 63488 00:18:37.812 }, 00:18:37.812 { 00:18:37.812 "name": "BaseBdev3", 00:18:37.812 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:37.812 "is_configured": true, 00:18:37.812 "data_offset": 2048, 00:18:37.812 "data_size": 63488 00:18:37.812 }, 00:18:37.812 { 00:18:37.812 "name": "BaseBdev4", 00:18:37.812 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:37.812 "is_configured": true, 00:18:37.812 "data_offset": 2048, 
00:18:37.812 "data_size": 63488 00:18:37.812 } 00:18:37.812 ] 00:18:37.812 }' 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.812 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.813 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.813 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.813 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.813 "name": "raid_bdev1", 00:18:37.813 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:37.813 "strip_size_kb": 64, 00:18:37.813 
"state": "online", 00:18:37.813 "raid_level": "raid5f", 00:18:37.813 "superblock": true, 00:18:37.813 "num_base_bdevs": 4, 00:18:37.813 "num_base_bdevs_discovered": 4, 00:18:37.813 "num_base_bdevs_operational": 4, 00:18:37.813 "base_bdevs_list": [ 00:18:37.813 { 00:18:37.813 "name": "spare", 00:18:37.813 "uuid": "a6535f5b-24eb-534e-a7db-960cd0b414a2", 00:18:37.813 "is_configured": true, 00:18:37.813 "data_offset": 2048, 00:18:37.813 "data_size": 63488 00:18:37.813 }, 00:18:37.813 { 00:18:37.813 "name": "BaseBdev2", 00:18:37.813 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:37.813 "is_configured": true, 00:18:37.813 "data_offset": 2048, 00:18:37.813 "data_size": 63488 00:18:37.813 }, 00:18:37.813 { 00:18:37.813 "name": "BaseBdev3", 00:18:37.813 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:37.813 "is_configured": true, 00:18:37.813 "data_offset": 2048, 00:18:37.813 "data_size": 63488 00:18:37.813 }, 00:18:37.813 { 00:18:37.813 "name": "BaseBdev4", 00:18:37.813 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:37.813 "is_configured": true, 00:18:37.813 "data_offset": 2048, 00:18:37.813 "data_size": 63488 00:18:37.813 } 00:18:37.813 ] 00:18:37.813 }' 00:18:37.813 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.073 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:38.073 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.073 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:38.073 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:38.073 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.073 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:38.073 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:38.073 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:38.073 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:38.073 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.073 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.073 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.073 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.073 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.073 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.073 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.073 11:28:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.073 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.073 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.073 "name": "raid_bdev1", 00:18:38.073 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:38.073 "strip_size_kb": 64, 00:18:38.073 "state": "online", 00:18:38.073 "raid_level": "raid5f", 00:18:38.073 "superblock": true, 00:18:38.073 "num_base_bdevs": 4, 00:18:38.073 "num_base_bdevs_discovered": 4, 00:18:38.073 "num_base_bdevs_operational": 4, 00:18:38.073 "base_bdevs_list": [ 00:18:38.073 { 00:18:38.073 "name": "spare", 00:18:38.073 "uuid": "a6535f5b-24eb-534e-a7db-960cd0b414a2", 00:18:38.073 "is_configured": true, 00:18:38.073 
"data_offset": 2048, 00:18:38.073 "data_size": 63488 00:18:38.073 }, 00:18:38.073 { 00:18:38.073 "name": "BaseBdev2", 00:18:38.073 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:38.073 "is_configured": true, 00:18:38.073 "data_offset": 2048, 00:18:38.073 "data_size": 63488 00:18:38.073 }, 00:18:38.073 { 00:18:38.073 "name": "BaseBdev3", 00:18:38.073 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:38.073 "is_configured": true, 00:18:38.073 "data_offset": 2048, 00:18:38.073 "data_size": 63488 00:18:38.073 }, 00:18:38.073 { 00:18:38.073 "name": "BaseBdev4", 00:18:38.073 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:38.073 "is_configured": true, 00:18:38.073 "data_offset": 2048, 00:18:38.073 "data_size": 63488 00:18:38.073 } 00:18:38.073 ] 00:18:38.073 }' 00:18:38.073 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.073 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.334 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:38.334 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.334 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.334 [2024-11-20 11:28:21.441595] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.334 [2024-11-20 11:28:21.441695] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.334 [2024-11-20 11:28:21.441805] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.334 [2024-11-20 11:28:21.441933] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.334 [2024-11-20 11:28:21.442002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:38.334 
11:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.595 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:38.595 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.595 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.595 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.595 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.595 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:38.595 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:38.595 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:38.595 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:38.595 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:38.595 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:38.595 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:38.595 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:38.595 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:38.595 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:38.595 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:38.595 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:38.595 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:38.595 /dev/nbd0 00:18:38.856 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:38.856 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:38.856 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:38.856 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:38.856 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:38.856 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:38.856 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:38.856 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:38.856 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:38.856 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:38.856 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:38.856 1+0 records in 00:18:38.856 1+0 records out 00:18:38.856 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253223 s, 16.2 MB/s 00:18:38.856 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:38.856 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:38.856 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:38.856 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:38.856 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:38.856 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:38.856 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:38.856 11:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:39.115 /dev/nbd1 00:18:39.115 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:39.115 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:39.115 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:39.116 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:39.116 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:39.116 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:39.116 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:39.116 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:39.116 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:39.116 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:39.116 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:39.116 1+0 records in 00:18:39.116 1+0 records out 00:18:39.116 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469068 s, 8.7 MB/s 00:18:39.116 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:39.116 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:39.116 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:39.116 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:39.116 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:39.116 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:39.116 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:39.116 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:39.376 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:39.376 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:39.376 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:39.376 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:39.376 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:39.376 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:39.376 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:39.376 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:39.376 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:39.376 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:39.376 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:39.376 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:39.376 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:39.376 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:39.376 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:39.376 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:39.376 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:39.637 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:39.637 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:39.637 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:39.637 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:39.637 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:39.637 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:39.637 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:39.637 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:39.637 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:39.637 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:39.637 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.637 
11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.637 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.637 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:39.637 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.637 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.637 [2024-11-20 11:28:22.699338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:39.637 [2024-11-20 11:28:22.699424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.637 [2024-11-20 11:28:22.699461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:39.637 [2024-11-20 11:28:22.699491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.637 [2024-11-20 11:28:22.702252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.637 [2024-11-20 11:28:22.702296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:39.637 [2024-11-20 11:28:22.702407] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:39.637 [2024-11-20 11:28:22.702483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:39.637 [2024-11-20 11:28:22.702645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:39.637 [2024-11-20 11:28:22.702749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:39.637 [2024-11-20 11:28:22.702852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:39.637 spare 00:18:39.637 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:39.637 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:39.637 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.637 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.897 [2024-11-20 11:28:22.802788] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:39.897 [2024-11-20 11:28:22.802855] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:39.897 [2024-11-20 11:28:22.803251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:18:39.897 [2024-11-20 11:28:22.811006] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:39.897 [2024-11-20 11:28:22.811041] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:39.897 [2024-11-20 11:28:22.811300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.897 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.897 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:39.897 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.897 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.897 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:39.897 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:39.897 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:39.897 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.897 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.897 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.897 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.897 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.897 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.897 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.897 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.897 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.897 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.897 "name": "raid_bdev1", 00:18:39.897 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:39.897 "strip_size_kb": 64, 00:18:39.897 "state": "online", 00:18:39.897 "raid_level": "raid5f", 00:18:39.897 "superblock": true, 00:18:39.897 "num_base_bdevs": 4, 00:18:39.897 "num_base_bdevs_discovered": 4, 00:18:39.897 "num_base_bdevs_operational": 4, 00:18:39.897 "base_bdevs_list": [ 00:18:39.897 { 00:18:39.897 "name": "spare", 00:18:39.897 "uuid": "a6535f5b-24eb-534e-a7db-960cd0b414a2", 00:18:39.897 "is_configured": true, 00:18:39.897 "data_offset": 2048, 00:18:39.897 "data_size": 63488 00:18:39.897 }, 00:18:39.897 { 00:18:39.897 "name": "BaseBdev2", 00:18:39.897 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:39.897 "is_configured": true, 00:18:39.897 "data_offset": 2048, 00:18:39.897 "data_size": 63488 00:18:39.897 }, 00:18:39.897 { 00:18:39.897 "name": "BaseBdev3", 00:18:39.897 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:39.897 
"is_configured": true, 00:18:39.897 "data_offset": 2048, 00:18:39.897 "data_size": 63488 00:18:39.897 }, 00:18:39.897 { 00:18:39.897 "name": "BaseBdev4", 00:18:39.897 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:39.897 "is_configured": true, 00:18:39.897 "data_offset": 2048, 00:18:39.897 "data_size": 63488 00:18:39.897 } 00:18:39.897 ] 00:18:39.897 }' 00:18:39.897 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.897 11:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.468 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:40.468 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.468 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:40.468 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.469 "name": "raid_bdev1", 00:18:40.469 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:40.469 "strip_size_kb": 64, 00:18:40.469 "state": "online", 00:18:40.469 "raid_level": "raid5f", 
00:18:40.469 "superblock": true, 00:18:40.469 "num_base_bdevs": 4, 00:18:40.469 "num_base_bdevs_discovered": 4, 00:18:40.469 "num_base_bdevs_operational": 4, 00:18:40.469 "base_bdevs_list": [ 00:18:40.469 { 00:18:40.469 "name": "spare", 00:18:40.469 "uuid": "a6535f5b-24eb-534e-a7db-960cd0b414a2", 00:18:40.469 "is_configured": true, 00:18:40.469 "data_offset": 2048, 00:18:40.469 "data_size": 63488 00:18:40.469 }, 00:18:40.469 { 00:18:40.469 "name": "BaseBdev2", 00:18:40.469 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:40.469 "is_configured": true, 00:18:40.469 "data_offset": 2048, 00:18:40.469 "data_size": 63488 00:18:40.469 }, 00:18:40.469 { 00:18:40.469 "name": "BaseBdev3", 00:18:40.469 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:40.469 "is_configured": true, 00:18:40.469 "data_offset": 2048, 00:18:40.469 "data_size": 63488 00:18:40.469 }, 00:18:40.469 { 00:18:40.469 "name": "BaseBdev4", 00:18:40.469 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:40.469 "is_configured": true, 00:18:40.469 "data_offset": 2048, 00:18:40.469 "data_size": 63488 00:18:40.469 } 00:18:40.469 ] 00:18:40.469 }' 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.469 [2024-11-20 11:28:23.451605] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.469 "name": "raid_bdev1", 00:18:40.469 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:40.469 "strip_size_kb": 64, 00:18:40.469 "state": "online", 00:18:40.469 "raid_level": "raid5f", 00:18:40.469 "superblock": true, 00:18:40.469 "num_base_bdevs": 4, 00:18:40.469 "num_base_bdevs_discovered": 3, 00:18:40.469 "num_base_bdevs_operational": 3, 00:18:40.469 "base_bdevs_list": [ 00:18:40.469 { 00:18:40.469 "name": null, 00:18:40.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.469 "is_configured": false, 00:18:40.469 "data_offset": 0, 00:18:40.469 "data_size": 63488 00:18:40.469 }, 00:18:40.469 { 00:18:40.469 "name": "BaseBdev2", 00:18:40.469 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:40.469 "is_configured": true, 00:18:40.469 "data_offset": 2048, 00:18:40.469 "data_size": 63488 00:18:40.469 }, 00:18:40.469 { 00:18:40.469 "name": "BaseBdev3", 00:18:40.469 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:40.469 "is_configured": true, 00:18:40.469 "data_offset": 2048, 00:18:40.469 "data_size": 63488 00:18:40.469 }, 00:18:40.469 { 00:18:40.469 "name": "BaseBdev4", 00:18:40.469 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:40.469 "is_configured": true, 00:18:40.469 "data_offset": 2048, 00:18:40.469 "data_size": 63488 00:18:40.469 } 00:18:40.469 ] 00:18:40.469 }' 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.469 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.785 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:40.785 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.785 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.785 [2024-11-20 11:28:23.870978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:40.785 [2024-11-20 11:28:23.871185] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:40.785 [2024-11-20 11:28:23.871214] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:40.785 [2024-11-20 11:28:23.871250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:41.048 [2024-11-20 11:28:23.886659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:18:41.048 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.048 11:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:41.048 [2024-11-20 11:28:23.896893] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:41.987 11:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.987 11:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.987 11:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.987 11:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.987 11:28:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.987 11:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.987 11:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.987 11:28:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.987 11:28:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.987 11:28:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.987 11:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.987 "name": "raid_bdev1", 00:18:41.987 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:41.987 "strip_size_kb": 64, 00:18:41.987 "state": "online", 00:18:41.987 "raid_level": "raid5f", 00:18:41.987 "superblock": true, 00:18:41.987 "num_base_bdevs": 4, 00:18:41.987 "num_base_bdevs_discovered": 4, 00:18:41.987 "num_base_bdevs_operational": 4, 00:18:41.987 "process": { 00:18:41.987 "type": "rebuild", 00:18:41.987 "target": "spare", 00:18:41.987 "progress": { 00:18:41.987 "blocks": 19200, 00:18:41.987 "percent": 10 00:18:41.987 } 00:18:41.987 }, 00:18:41.987 "base_bdevs_list": [ 00:18:41.987 { 00:18:41.987 "name": "spare", 00:18:41.987 "uuid": "a6535f5b-24eb-534e-a7db-960cd0b414a2", 00:18:41.987 "is_configured": true, 00:18:41.987 "data_offset": 2048, 00:18:41.987 "data_size": 63488 00:18:41.987 }, 00:18:41.987 { 00:18:41.987 "name": "BaseBdev2", 00:18:41.987 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:41.987 "is_configured": true, 00:18:41.987 "data_offset": 2048, 00:18:41.987 "data_size": 63488 00:18:41.987 }, 00:18:41.987 { 00:18:41.987 "name": "BaseBdev3", 00:18:41.987 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:41.987 "is_configured": true, 00:18:41.987 "data_offset": 2048, 00:18:41.987 "data_size": 
63488 00:18:41.987 }, 00:18:41.987 { 00:18:41.987 "name": "BaseBdev4", 00:18:41.987 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:41.987 "is_configured": true, 00:18:41.987 "data_offset": 2048, 00:18:41.987 "data_size": 63488 00:18:41.987 } 00:18:41.987 ] 00:18:41.987 }' 00:18:41.987 11:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.987 11:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.987 11:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.987 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.987 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:41.987 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.987 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.987 [2024-11-20 11:28:25.056192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:42.248 [2024-11-20 11:28:25.104230] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:42.248 [2024-11-20 11:28:25.104317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.248 [2024-11-20 11:28:25.104338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:42.248 [2024-11-20 11:28:25.104348] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:42.248 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.248 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:42.248 11:28:25 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.248 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.248 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:42.248 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:42.248 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:42.248 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.248 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.248 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.248 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.248 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.248 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.248 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.248 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.248 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.248 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.248 "name": "raid_bdev1", 00:18:42.248 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:42.248 "strip_size_kb": 64, 00:18:42.248 "state": "online", 00:18:42.248 "raid_level": "raid5f", 00:18:42.248 "superblock": true, 00:18:42.248 "num_base_bdevs": 4, 00:18:42.248 "num_base_bdevs_discovered": 3, 00:18:42.248 "num_base_bdevs_operational": 3, 00:18:42.248 "base_bdevs_list": [ 00:18:42.248 
{ 00:18:42.248 "name": null, 00:18:42.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.248 "is_configured": false, 00:18:42.248 "data_offset": 0, 00:18:42.248 "data_size": 63488 00:18:42.248 }, 00:18:42.248 { 00:18:42.248 "name": "BaseBdev2", 00:18:42.248 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:42.248 "is_configured": true, 00:18:42.248 "data_offset": 2048, 00:18:42.248 "data_size": 63488 00:18:42.248 }, 00:18:42.248 { 00:18:42.248 "name": "BaseBdev3", 00:18:42.248 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:42.248 "is_configured": true, 00:18:42.248 "data_offset": 2048, 00:18:42.248 "data_size": 63488 00:18:42.248 }, 00:18:42.248 { 00:18:42.248 "name": "BaseBdev4", 00:18:42.248 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:42.248 "is_configured": true, 00:18:42.248 "data_offset": 2048, 00:18:42.248 "data_size": 63488 00:18:42.248 } 00:18:42.248 ] 00:18:42.248 }' 00:18:42.248 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.248 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.507 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:42.507 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.507 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.507 [2024-11-20 11:28:25.581444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:42.507 [2024-11-20 11:28:25.581528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.507 [2024-11-20 11:28:25.581565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:42.507 [2024-11-20 11:28:25.581579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.507 [2024-11-20 11:28:25.582143] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.507 [2024-11-20 11:28:25.582182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:42.507 [2024-11-20 11:28:25.582293] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:42.507 [2024-11-20 11:28:25.582317] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:42.507 [2024-11-20 11:28:25.582329] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:42.507 [2024-11-20 11:28:25.582365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:42.508 [2024-11-20 11:28:25.598864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:18:42.508 spare 00:18:42.508 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.508 11:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:42.508 [2024-11-20 11:28:25.609455] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:43.887 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.887 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.887 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.887 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.887 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.887 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.887 11:28:26 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.887 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.887 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.887 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.887 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.887 "name": "raid_bdev1", 00:18:43.887 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:43.887 "strip_size_kb": 64, 00:18:43.887 "state": "online", 00:18:43.887 "raid_level": "raid5f", 00:18:43.887 "superblock": true, 00:18:43.887 "num_base_bdevs": 4, 00:18:43.887 "num_base_bdevs_discovered": 4, 00:18:43.887 "num_base_bdevs_operational": 4, 00:18:43.887 "process": { 00:18:43.888 "type": "rebuild", 00:18:43.888 "target": "spare", 00:18:43.888 "progress": { 00:18:43.888 "blocks": 17280, 00:18:43.888 "percent": 9 00:18:43.888 } 00:18:43.888 }, 00:18:43.888 "base_bdevs_list": [ 00:18:43.888 { 00:18:43.888 "name": "spare", 00:18:43.888 "uuid": "a6535f5b-24eb-534e-a7db-960cd0b414a2", 00:18:43.888 "is_configured": true, 00:18:43.888 "data_offset": 2048, 00:18:43.888 "data_size": 63488 00:18:43.888 }, 00:18:43.888 { 00:18:43.888 "name": "BaseBdev2", 00:18:43.888 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:43.888 "is_configured": true, 00:18:43.888 "data_offset": 2048, 00:18:43.888 "data_size": 63488 00:18:43.888 }, 00:18:43.888 { 00:18:43.888 "name": "BaseBdev3", 00:18:43.888 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:43.888 "is_configured": true, 00:18:43.888 "data_offset": 2048, 00:18:43.888 "data_size": 63488 00:18:43.888 }, 00:18:43.888 { 00:18:43.888 "name": "BaseBdev4", 00:18:43.888 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:43.888 "is_configured": true, 00:18:43.888 "data_offset": 2048, 00:18:43.888 "data_size": 63488 00:18:43.888 } 
00:18:43.888 ] 00:18:43.888 }' 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.888 [2024-11-20 11:28:26.768628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:43.888 [2024-11-20 11:28:26.818565] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:43.888 [2024-11-20 11:28:26.818641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.888 [2024-11-20 11:28:26.818664] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:43.888 [2024-11-20 11:28:26.818672] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.888 "name": "raid_bdev1", 00:18:43.888 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:43.888 "strip_size_kb": 64, 00:18:43.888 "state": "online", 00:18:43.888 "raid_level": "raid5f", 00:18:43.888 "superblock": true, 00:18:43.888 "num_base_bdevs": 4, 00:18:43.888 "num_base_bdevs_discovered": 3, 00:18:43.888 "num_base_bdevs_operational": 3, 00:18:43.888 "base_bdevs_list": [ 00:18:43.888 { 00:18:43.888 "name": null, 00:18:43.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.888 "is_configured": false, 00:18:43.888 "data_offset": 0, 00:18:43.888 "data_size": 63488 00:18:43.888 }, 00:18:43.888 { 00:18:43.888 
"name": "BaseBdev2", 00:18:43.888 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:43.888 "is_configured": true, 00:18:43.888 "data_offset": 2048, 00:18:43.888 "data_size": 63488 00:18:43.888 }, 00:18:43.888 { 00:18:43.888 "name": "BaseBdev3", 00:18:43.888 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:43.888 "is_configured": true, 00:18:43.888 "data_offset": 2048, 00:18:43.888 "data_size": 63488 00:18:43.888 }, 00:18:43.888 { 00:18:43.888 "name": "BaseBdev4", 00:18:43.888 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:43.888 "is_configured": true, 00:18:43.888 "data_offset": 2048, 00:18:43.888 "data_size": 63488 00:18:43.888 } 00:18:43.888 ] 00:18:43.888 }' 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.888 11:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.456 "name": "raid_bdev1", 00:18:44.456 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:44.456 "strip_size_kb": 64, 00:18:44.456 "state": "online", 00:18:44.456 "raid_level": "raid5f", 00:18:44.456 "superblock": true, 00:18:44.456 "num_base_bdevs": 4, 00:18:44.456 "num_base_bdevs_discovered": 3, 00:18:44.456 "num_base_bdevs_operational": 3, 00:18:44.456 "base_bdevs_list": [ 00:18:44.456 { 00:18:44.456 "name": null, 00:18:44.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.456 "is_configured": false, 00:18:44.456 "data_offset": 0, 00:18:44.456 "data_size": 63488 00:18:44.456 }, 00:18:44.456 { 00:18:44.456 "name": "BaseBdev2", 00:18:44.456 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:44.456 "is_configured": true, 00:18:44.456 "data_offset": 2048, 00:18:44.456 "data_size": 63488 00:18:44.456 }, 00:18:44.456 { 00:18:44.456 "name": "BaseBdev3", 00:18:44.456 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:44.456 "is_configured": true, 00:18:44.456 "data_offset": 2048, 00:18:44.456 "data_size": 63488 00:18:44.456 }, 00:18:44.456 { 00:18:44.456 "name": "BaseBdev4", 00:18:44.456 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:44.456 "is_configured": true, 00:18:44.456 "data_offset": 2048, 00:18:44.456 "data_size": 63488 00:18:44.456 } 00:18:44.456 ] 00:18:44.456 }' 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.456 [2024-11-20 11:28:27.490118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:44.456 [2024-11-20 11:28:27.490193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.456 [2024-11-20 11:28:27.490221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:44.456 [2024-11-20 11:28:27.490233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.456 [2024-11-20 11:28:27.490799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.456 [2024-11-20 11:28:27.490822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:44.456 [2024-11-20 11:28:27.490919] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:44.456 [2024-11-20 11:28:27.490936] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:44.456 [2024-11-20 11:28:27.490950] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:44.456 [2024-11-20 11:28:27.490963] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to 
examine bdev BaseBdev1: Invalid argument 00:18:44.456 BaseBdev1 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.456 11:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:45.415 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:45.415 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.415 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.415 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:45.415 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:45.415 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:45.415 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.415 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.415 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.415 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.415 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.415 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.415 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.415 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.718 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.718 11:28:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.718 "name": "raid_bdev1", 00:18:45.718 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:45.718 "strip_size_kb": 64, 00:18:45.718 "state": "online", 00:18:45.718 "raid_level": "raid5f", 00:18:45.718 "superblock": true, 00:18:45.718 "num_base_bdevs": 4, 00:18:45.718 "num_base_bdevs_discovered": 3, 00:18:45.718 "num_base_bdevs_operational": 3, 00:18:45.718 "base_bdevs_list": [ 00:18:45.718 { 00:18:45.718 "name": null, 00:18:45.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.718 "is_configured": false, 00:18:45.718 "data_offset": 0, 00:18:45.718 "data_size": 63488 00:18:45.718 }, 00:18:45.718 { 00:18:45.718 "name": "BaseBdev2", 00:18:45.718 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:45.718 "is_configured": true, 00:18:45.718 "data_offset": 2048, 00:18:45.718 "data_size": 63488 00:18:45.718 }, 00:18:45.718 { 00:18:45.718 "name": "BaseBdev3", 00:18:45.718 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:45.718 "is_configured": true, 00:18:45.718 "data_offset": 2048, 00:18:45.718 "data_size": 63488 00:18:45.718 }, 00:18:45.718 { 00:18:45.718 "name": "BaseBdev4", 00:18:45.718 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:45.718 "is_configured": true, 00:18:45.718 "data_offset": 2048, 00:18:45.718 "data_size": 63488 00:18:45.718 } 00:18:45.718 ] 00:18:45.718 }' 00:18:45.718 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.718 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.979 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:45.979 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.979 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:45.979 11:28:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:45.979 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.979 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.979 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.979 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.979 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.979 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.979 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.979 "name": "raid_bdev1", 00:18:45.979 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:45.979 "strip_size_kb": 64, 00:18:45.979 "state": "online", 00:18:45.979 "raid_level": "raid5f", 00:18:45.979 "superblock": true, 00:18:45.979 "num_base_bdevs": 4, 00:18:45.979 "num_base_bdevs_discovered": 3, 00:18:45.979 "num_base_bdevs_operational": 3, 00:18:45.979 "base_bdevs_list": [ 00:18:45.979 { 00:18:45.979 "name": null, 00:18:45.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.979 "is_configured": false, 00:18:45.979 "data_offset": 0, 00:18:45.979 "data_size": 63488 00:18:45.979 }, 00:18:45.979 { 00:18:45.979 "name": "BaseBdev2", 00:18:45.979 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:45.979 "is_configured": true, 00:18:45.979 "data_offset": 2048, 00:18:45.979 "data_size": 63488 00:18:45.979 }, 00:18:45.979 { 00:18:45.979 "name": "BaseBdev3", 00:18:45.979 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:45.979 "is_configured": true, 00:18:45.979 "data_offset": 2048, 00:18:45.979 "data_size": 63488 00:18:45.979 }, 00:18:45.979 { 00:18:45.979 "name": "BaseBdev4", 00:18:45.979 "uuid": 
"e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:45.979 "is_configured": true, 00:18:45.979 "data_offset": 2048, 00:18:45.979 "data_size": 63488 00:18:45.979 } 00:18:45.979 ] 00:18:45.979 }' 00:18:45.979 11:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.979 11:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:45.979 11:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.979 11:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:45.979 11:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:45.979 11:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:18:45.979 11:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:45.979 11:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:45.979 11:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.979 11:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:45.979 11:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.979 11:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:45.979 11:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.979 11:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.979 [2024-11-20 11:28:29.063568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:45.979 
[2024-11-20 11:28:29.063759] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:45.979 [2024-11-20 11:28:29.063785] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:45.979 request: 00:18:45.979 { 00:18:45.979 "base_bdev": "BaseBdev1", 00:18:45.979 "raid_bdev": "raid_bdev1", 00:18:45.979 "method": "bdev_raid_add_base_bdev", 00:18:45.979 "req_id": 1 00:18:45.979 } 00:18:45.979 Got JSON-RPC error response 00:18:45.979 response: 00:18:45.979 { 00:18:45.979 "code": -22, 00:18:45.979 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:45.979 } 00:18:45.979 11:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:45.979 11:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:18:45.979 11:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:45.979 11:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:45.979 11:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:45.979 11:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:47.358 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:47.358 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.358 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.358 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:47.358 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:47.358 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:18:47.358 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.358 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.358 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.358 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.358 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.358 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.358 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.358 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.358 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.358 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.358 "name": "raid_bdev1", 00:18:47.358 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:47.358 "strip_size_kb": 64, 00:18:47.358 "state": "online", 00:18:47.358 "raid_level": "raid5f", 00:18:47.358 "superblock": true, 00:18:47.358 "num_base_bdevs": 4, 00:18:47.358 "num_base_bdevs_discovered": 3, 00:18:47.358 "num_base_bdevs_operational": 3, 00:18:47.358 "base_bdevs_list": [ 00:18:47.358 { 00:18:47.358 "name": null, 00:18:47.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.358 "is_configured": false, 00:18:47.358 "data_offset": 0, 00:18:47.359 "data_size": 63488 00:18:47.359 }, 00:18:47.359 { 00:18:47.359 "name": "BaseBdev2", 00:18:47.359 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:47.359 "is_configured": true, 00:18:47.359 "data_offset": 2048, 00:18:47.359 "data_size": 63488 00:18:47.359 }, 00:18:47.359 { 00:18:47.359 "name": 
"BaseBdev3", 00:18:47.359 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:47.359 "is_configured": true, 00:18:47.359 "data_offset": 2048, 00:18:47.359 "data_size": 63488 00:18:47.359 }, 00:18:47.359 { 00:18:47.359 "name": "BaseBdev4", 00:18:47.359 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:47.359 "is_configured": true, 00:18:47.359 "data_offset": 2048, 00:18:47.359 "data_size": 63488 00:18:47.359 } 00:18:47.359 ] 00:18:47.359 }' 00:18:47.359 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.359 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.619 "name": "raid_bdev1", 00:18:47.619 "uuid": "2c110f6a-7a7f-4e67-9dae-dc80b85b4578", 00:18:47.619 
"strip_size_kb": 64, 00:18:47.619 "state": "online", 00:18:47.619 "raid_level": "raid5f", 00:18:47.619 "superblock": true, 00:18:47.619 "num_base_bdevs": 4, 00:18:47.619 "num_base_bdevs_discovered": 3, 00:18:47.619 "num_base_bdevs_operational": 3, 00:18:47.619 "base_bdevs_list": [ 00:18:47.619 { 00:18:47.619 "name": null, 00:18:47.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.619 "is_configured": false, 00:18:47.619 "data_offset": 0, 00:18:47.619 "data_size": 63488 00:18:47.619 }, 00:18:47.619 { 00:18:47.619 "name": "BaseBdev2", 00:18:47.619 "uuid": "12cb130a-fbc8-5fc3-81db-d7e30f517d8c", 00:18:47.619 "is_configured": true, 00:18:47.619 "data_offset": 2048, 00:18:47.619 "data_size": 63488 00:18:47.619 }, 00:18:47.619 { 00:18:47.619 "name": "BaseBdev3", 00:18:47.619 "uuid": "560b6a0b-46c5-5919-8cc1-98716a4a742c", 00:18:47.619 "is_configured": true, 00:18:47.619 "data_offset": 2048, 00:18:47.619 "data_size": 63488 00:18:47.619 }, 00:18:47.619 { 00:18:47.619 "name": "BaseBdev4", 00:18:47.619 "uuid": "e0d6490b-f0cd-549d-8170-987c80c98a03", 00:18:47.619 "is_configured": true, 00:18:47.619 "data_offset": 2048, 00:18:47.619 "data_size": 63488 00:18:47.619 } 00:18:47.619 ] 00:18:47.619 }' 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85340 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85340 ']' 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85340 00:18:47.619 
11:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85340 00:18:47.619 killing process with pid 85340 00:18:47.619 Received shutdown signal, test time was about 60.000000 seconds 00:18:47.619 00:18:47.619 Latency(us) 00:18:47.619 [2024-11-20T11:28:30.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.619 [2024-11-20T11:28:30.735Z] =================================================================================================================== 00:18:47.619 [2024-11-20T11:28:30.735Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85340' 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85340 00:18:47.619 [2024-11-20 11:28:30.641478] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:47.619 [2024-11-20 11:28:30.641635] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.619 11:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85340 00:18:47.619 [2024-11-20 11:28:30.641717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:47.619 [2024-11-20 11:28:30.641730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:48.193 [2024-11-20 11:28:31.149870] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:49.188 11:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:49.188 00:18:49.188 real 0m26.967s 00:18:49.188 user 0m33.908s 00:18:49.188 sys 0m2.897s 00:18:49.188 11:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:49.188 11:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.188 ************************************ 00:18:49.188 END TEST raid5f_rebuild_test_sb 00:18:49.188 ************************************ 00:18:49.448 11:28:32 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:18:49.448 11:28:32 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:18:49.448 11:28:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:49.448 11:28:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:49.448 11:28:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.448 ************************************ 00:18:49.448 START TEST raid_state_function_test_sb_4k 00:18:49.448 ************************************ 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86149 
00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86149' 00:18:49.448 Process raid pid: 86149 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86149 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86149 ']' 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.448 11:28:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:49.448 [2024-11-20 11:28:32.487280] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:18:49.448 [2024-11-20 11:28:32.487447] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.708 [2024-11-20 11:28:32.670128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.708 [2024-11-20 11:28:32.802273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.967 [2024-11-20 11:28:33.023100] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.967 [2024-11-20 11:28:33.023155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.539 [2024-11-20 11:28:33.362784] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:50.539 [2024-11-20 11:28:33.362842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:50.539 [2024-11-20 11:28:33.362852] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:50.539 [2024-11-20 11:28:33.362879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.539 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.539 "name": "Existed_Raid", 00:18:50.539 "uuid": 
"a7db8fd0-f92c-4b9a-9e9a-023ebf3696c9", 00:18:50.539 "strip_size_kb": 0, 00:18:50.539 "state": "configuring", 00:18:50.539 "raid_level": "raid1", 00:18:50.539 "superblock": true, 00:18:50.539 "num_base_bdevs": 2, 00:18:50.539 "num_base_bdevs_discovered": 0, 00:18:50.539 "num_base_bdevs_operational": 2, 00:18:50.539 "base_bdevs_list": [ 00:18:50.539 { 00:18:50.539 "name": "BaseBdev1", 00:18:50.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.540 "is_configured": false, 00:18:50.540 "data_offset": 0, 00:18:50.540 "data_size": 0 00:18:50.540 }, 00:18:50.540 { 00:18:50.540 "name": "BaseBdev2", 00:18:50.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.540 "is_configured": false, 00:18:50.540 "data_offset": 0, 00:18:50.540 "data_size": 0 00:18:50.540 } 00:18:50.540 ] 00:18:50.540 }' 00:18:50.540 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.540 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.807 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:50.807 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.807 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.807 [2024-11-20 11:28:33.853909] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:50.807 [2024-11-20 11:28:33.853949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:50.807 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.807 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:50.807 11:28:33 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.807 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.807 [2024-11-20 11:28:33.865873] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:50.807 [2024-11-20 11:28:33.865918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:50.807 [2024-11-20 11:28:33.865928] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:50.807 [2024-11-20 11:28:33.865940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:50.807 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.807 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:18:50.807 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.807 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.807 [2024-11-20 11:28:33.913259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:50.807 BaseBdev1 00:18:50.807 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.807 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:50.807 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:50.807 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:50.807 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:18:50.807 11:28:33 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:50.807 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:50.807 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:50.807 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.807 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:51.067 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.067 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:51.067 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.067 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:51.067 [ 00:18:51.067 { 00:18:51.067 "name": "BaseBdev1", 00:18:51.067 "aliases": [ 00:18:51.067 "b5ab553d-1d41-4c56-90fa-866de62ad708" 00:18:51.067 ], 00:18:51.067 "product_name": "Malloc disk", 00:18:51.067 "block_size": 4096, 00:18:51.067 "num_blocks": 8192, 00:18:51.067 "uuid": "b5ab553d-1d41-4c56-90fa-866de62ad708", 00:18:51.067 "assigned_rate_limits": { 00:18:51.067 "rw_ios_per_sec": 0, 00:18:51.067 "rw_mbytes_per_sec": 0, 00:18:51.067 "r_mbytes_per_sec": 0, 00:18:51.067 "w_mbytes_per_sec": 0 00:18:51.067 }, 00:18:51.067 "claimed": true, 00:18:51.067 "claim_type": "exclusive_write", 00:18:51.067 "zoned": false, 00:18:51.067 "supported_io_types": { 00:18:51.067 "read": true, 00:18:51.067 "write": true, 00:18:51.067 "unmap": true, 00:18:51.067 "flush": true, 00:18:51.067 "reset": true, 00:18:51.067 "nvme_admin": false, 00:18:51.067 "nvme_io": false, 00:18:51.067 "nvme_io_md": false, 00:18:51.067 "write_zeroes": true, 00:18:51.067 "zcopy": true, 00:18:51.067 
"get_zone_info": false, 00:18:51.067 "zone_management": false, 00:18:51.067 "zone_append": false, 00:18:51.067 "compare": false, 00:18:51.067 "compare_and_write": false, 00:18:51.067 "abort": true, 00:18:51.067 "seek_hole": false, 00:18:51.067 "seek_data": false, 00:18:51.067 "copy": true, 00:18:51.067 "nvme_iov_md": false 00:18:51.067 }, 00:18:51.067 "memory_domains": [ 00:18:51.067 { 00:18:51.067 "dma_device_id": "system", 00:18:51.067 "dma_device_type": 1 00:18:51.067 }, 00:18:51.067 { 00:18:51.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.067 "dma_device_type": 2 00:18:51.067 } 00:18:51.067 ], 00:18:51.067 "driver_specific": {} 00:18:51.067 } 00:18:51.067 ] 00:18:51.067 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.067 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:18:51.067 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:51.067 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:51.067 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:51.067 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.067 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.067 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:51.067 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.067 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.067 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:51.067 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.067 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.067 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.067 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.067 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:51.067 11:28:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.067 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.067 "name": "Existed_Raid", 00:18:51.067 "uuid": "8b48e8a1-1f45-4e17-ad39-c5671fb186c3", 00:18:51.067 "strip_size_kb": 0, 00:18:51.067 "state": "configuring", 00:18:51.067 "raid_level": "raid1", 00:18:51.067 "superblock": true, 00:18:51.067 "num_base_bdevs": 2, 00:18:51.067 "num_base_bdevs_discovered": 1, 00:18:51.067 "num_base_bdevs_operational": 2, 00:18:51.067 "base_bdevs_list": [ 00:18:51.067 { 00:18:51.067 "name": "BaseBdev1", 00:18:51.067 "uuid": "b5ab553d-1d41-4c56-90fa-866de62ad708", 00:18:51.067 "is_configured": true, 00:18:51.067 "data_offset": 256, 00:18:51.067 "data_size": 7936 00:18:51.067 }, 00:18:51.067 { 00:18:51.067 "name": "BaseBdev2", 00:18:51.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.067 "is_configured": false, 00:18:51.067 "data_offset": 0, 00:18:51.067 "data_size": 0 00:18:51.067 } 00:18:51.067 ] 00:18:51.067 }' 00:18:51.067 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.067 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:51.328 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:51.328 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.328 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:51.328 [2024-11-20 11:28:34.416500] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:51.328 [2024-11-20 11:28:34.416557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:51.328 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.328 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:51.328 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.328 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:51.328 [2024-11-20 11:28:34.428532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:51.328 [2024-11-20 11:28:34.430497] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:51.328 [2024-11-20 11:28:34.430537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:51.328 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.328 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:51.328 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:51.328 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:51.328 11:28:34 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:51.328 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:51.328 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.328 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.328 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:51.328 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.328 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.328 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.328 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.328 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.328 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.587 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.587 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:51.587 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.587 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.587 "name": "Existed_Raid", 00:18:51.587 "uuid": "69d894ef-e4cb-4cb6-95f9-27f8909ae131", 00:18:51.587 "strip_size_kb": 0, 00:18:51.587 "state": "configuring", 00:18:51.587 "raid_level": "raid1", 00:18:51.587 "superblock": true, 
00:18:51.587 "num_base_bdevs": 2, 00:18:51.587 "num_base_bdevs_discovered": 1, 00:18:51.587 "num_base_bdevs_operational": 2, 00:18:51.587 "base_bdevs_list": [ 00:18:51.587 { 00:18:51.587 "name": "BaseBdev1", 00:18:51.587 "uuid": "b5ab553d-1d41-4c56-90fa-866de62ad708", 00:18:51.587 "is_configured": true, 00:18:51.587 "data_offset": 256, 00:18:51.587 "data_size": 7936 00:18:51.587 }, 00:18:51.587 { 00:18:51.587 "name": "BaseBdev2", 00:18:51.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.587 "is_configured": false, 00:18:51.587 "data_offset": 0, 00:18:51.587 "data_size": 0 00:18:51.587 } 00:18:51.587 ] 00:18:51.587 }' 00:18:51.587 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.587 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:51.847 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:18:51.847 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.847 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:51.847 [2024-11-20 11:28:34.914724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:51.847 [2024-11-20 11:28:34.915016] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:51.847 [2024-11-20 11:28:34.915038] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:51.847 [2024-11-20 11:28:34.915334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:51.847 BaseBdev2 00:18:51.847 [2024-11-20 11:28:34.915548] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:51.847 [2024-11-20 11:28:34.915565] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:18:51.847 [2024-11-20 11:28:34.915741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.847 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.847 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:51.847 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:51.847 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:51.847 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:18:51.847 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:51.847 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:51.847 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:51.848 [ 00:18:51.848 { 00:18:51.848 "name": "BaseBdev2", 00:18:51.848 "aliases": [ 00:18:51.848 "ff6edcb5-623a-4188-85b5-d836e78fecd0" 00:18:51.848 ], 00:18:51.848 "product_name": "Malloc 
disk", 00:18:51.848 "block_size": 4096, 00:18:51.848 "num_blocks": 8192, 00:18:51.848 "uuid": "ff6edcb5-623a-4188-85b5-d836e78fecd0", 00:18:51.848 "assigned_rate_limits": { 00:18:51.848 "rw_ios_per_sec": 0, 00:18:51.848 "rw_mbytes_per_sec": 0, 00:18:51.848 "r_mbytes_per_sec": 0, 00:18:51.848 "w_mbytes_per_sec": 0 00:18:51.848 }, 00:18:51.848 "claimed": true, 00:18:51.848 "claim_type": "exclusive_write", 00:18:51.848 "zoned": false, 00:18:51.848 "supported_io_types": { 00:18:51.848 "read": true, 00:18:51.848 "write": true, 00:18:51.848 "unmap": true, 00:18:51.848 "flush": true, 00:18:51.848 "reset": true, 00:18:51.848 "nvme_admin": false, 00:18:51.848 "nvme_io": false, 00:18:51.848 "nvme_io_md": false, 00:18:51.848 "write_zeroes": true, 00:18:51.848 "zcopy": true, 00:18:51.848 "get_zone_info": false, 00:18:51.848 "zone_management": false, 00:18:51.848 "zone_append": false, 00:18:51.848 "compare": false, 00:18:51.848 "compare_and_write": false, 00:18:51.848 "abort": true, 00:18:51.848 "seek_hole": false, 00:18:51.848 "seek_data": false, 00:18:51.848 "copy": true, 00:18:51.848 "nvme_iov_md": false 00:18:51.848 }, 00:18:51.848 "memory_domains": [ 00:18:51.848 { 00:18:51.848 "dma_device_id": "system", 00:18:51.848 "dma_device_type": 1 00:18:51.848 }, 00:18:51.848 { 00:18:51.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.848 "dma_device_type": 2 00:18:51.848 } 00:18:51.848 ], 00:18:51.848 "driver_specific": {} 00:18:51.848 } 00:18:51.848 ] 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.848 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.106 11:28:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.106 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.106 "name": "Existed_Raid", 00:18:52.106 "uuid": "69d894ef-e4cb-4cb6-95f9-27f8909ae131", 00:18:52.106 "strip_size_kb": 0, 00:18:52.106 "state": "online", 
00:18:52.106 "raid_level": "raid1", 00:18:52.106 "superblock": true, 00:18:52.106 "num_base_bdevs": 2, 00:18:52.106 "num_base_bdevs_discovered": 2, 00:18:52.106 "num_base_bdevs_operational": 2, 00:18:52.106 "base_bdevs_list": [ 00:18:52.106 { 00:18:52.106 "name": "BaseBdev1", 00:18:52.106 "uuid": "b5ab553d-1d41-4c56-90fa-866de62ad708", 00:18:52.106 "is_configured": true, 00:18:52.106 "data_offset": 256, 00:18:52.106 "data_size": 7936 00:18:52.106 }, 00:18:52.106 { 00:18:52.106 "name": "BaseBdev2", 00:18:52.106 "uuid": "ff6edcb5-623a-4188-85b5-d836e78fecd0", 00:18:52.106 "is_configured": true, 00:18:52.106 "data_offset": 256, 00:18:52.106 "data_size": 7936 00:18:52.106 } 00:18:52.106 ] 00:18:52.106 }' 00:18:52.106 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.106 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.365 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:52.365 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:52.365 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:52.365 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:52.365 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:52.365 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:52.365 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:52.365 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:52.365 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:52.365 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.365 [2024-11-20 11:28:35.418284] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:52.365 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.365 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:52.365 "name": "Existed_Raid", 00:18:52.365 "aliases": [ 00:18:52.365 "69d894ef-e4cb-4cb6-95f9-27f8909ae131" 00:18:52.365 ], 00:18:52.365 "product_name": "Raid Volume", 00:18:52.365 "block_size": 4096, 00:18:52.365 "num_blocks": 7936, 00:18:52.365 "uuid": "69d894ef-e4cb-4cb6-95f9-27f8909ae131", 00:18:52.365 "assigned_rate_limits": { 00:18:52.365 "rw_ios_per_sec": 0, 00:18:52.365 "rw_mbytes_per_sec": 0, 00:18:52.365 "r_mbytes_per_sec": 0, 00:18:52.365 "w_mbytes_per_sec": 0 00:18:52.365 }, 00:18:52.365 "claimed": false, 00:18:52.365 "zoned": false, 00:18:52.365 "supported_io_types": { 00:18:52.365 "read": true, 00:18:52.365 "write": true, 00:18:52.365 "unmap": false, 00:18:52.365 "flush": false, 00:18:52.365 "reset": true, 00:18:52.365 "nvme_admin": false, 00:18:52.365 "nvme_io": false, 00:18:52.365 "nvme_io_md": false, 00:18:52.365 "write_zeroes": true, 00:18:52.365 "zcopy": false, 00:18:52.365 "get_zone_info": false, 00:18:52.365 "zone_management": false, 00:18:52.365 "zone_append": false, 00:18:52.365 "compare": false, 00:18:52.365 "compare_and_write": false, 00:18:52.365 "abort": false, 00:18:52.365 "seek_hole": false, 00:18:52.365 "seek_data": false, 00:18:52.365 "copy": false, 00:18:52.365 "nvme_iov_md": false 00:18:52.365 }, 00:18:52.365 "memory_domains": [ 00:18:52.365 { 00:18:52.365 "dma_device_id": "system", 00:18:52.365 "dma_device_type": 1 00:18:52.365 }, 00:18:52.365 { 00:18:52.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.365 "dma_device_type": 2 00:18:52.365 }, 00:18:52.365 { 00:18:52.365 
"dma_device_id": "system", 00:18:52.365 "dma_device_type": 1 00:18:52.365 }, 00:18:52.365 { 00:18:52.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.365 "dma_device_type": 2 00:18:52.365 } 00:18:52.365 ], 00:18:52.365 "driver_specific": { 00:18:52.365 "raid": { 00:18:52.365 "uuid": "69d894ef-e4cb-4cb6-95f9-27f8909ae131", 00:18:52.365 "strip_size_kb": 0, 00:18:52.365 "state": "online", 00:18:52.365 "raid_level": "raid1", 00:18:52.365 "superblock": true, 00:18:52.365 "num_base_bdevs": 2, 00:18:52.365 "num_base_bdevs_discovered": 2, 00:18:52.365 "num_base_bdevs_operational": 2, 00:18:52.365 "base_bdevs_list": [ 00:18:52.365 { 00:18:52.365 "name": "BaseBdev1", 00:18:52.365 "uuid": "b5ab553d-1d41-4c56-90fa-866de62ad708", 00:18:52.365 "is_configured": true, 00:18:52.365 "data_offset": 256, 00:18:52.365 "data_size": 7936 00:18:52.365 }, 00:18:52.365 { 00:18:52.365 "name": "BaseBdev2", 00:18:52.365 "uuid": "ff6edcb5-623a-4188-85b5-d836e78fecd0", 00:18:52.365 "is_configured": true, 00:18:52.365 "data_offset": 256, 00:18:52.365 "data_size": 7936 00:18:52.365 } 00:18:52.365 ] 00:18:52.365 } 00:18:52.365 } 00:18:52.365 }' 00:18:52.365 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:52.624 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:52.624 BaseBdev2' 00:18:52.624 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:52.624 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:52.624 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:52.624 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:18:52.624 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:52.624 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.624 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.624 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.624 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:52.624 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:52.624 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:52.624 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:52.624 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.624 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.624 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:52.624 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.624 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:52.624 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:52.624 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:52.624 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.624 
11:28:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.624 [2024-11-20 11:28:35.649667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.888 11:28:35 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.888 "name": "Existed_Raid", 00:18:52.888 "uuid": "69d894ef-e4cb-4cb6-95f9-27f8909ae131", 00:18:52.888 "strip_size_kb": 0, 00:18:52.888 "state": "online", 00:18:52.888 "raid_level": "raid1", 00:18:52.888 "superblock": true, 00:18:52.888 "num_base_bdevs": 2, 00:18:52.888 "num_base_bdevs_discovered": 1, 00:18:52.888 "num_base_bdevs_operational": 1, 00:18:52.888 "base_bdevs_list": [ 00:18:52.888 { 00:18:52.888 "name": null, 00:18:52.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.888 "is_configured": false, 00:18:52.888 "data_offset": 0, 00:18:52.888 "data_size": 7936 00:18:52.888 }, 00:18:52.888 { 00:18:52.888 "name": "BaseBdev2", 00:18:52.888 "uuid": "ff6edcb5-623a-4188-85b5-d836e78fecd0", 00:18:52.888 "is_configured": true, 00:18:52.888 "data_offset": 256, 00:18:52.888 "data_size": 7936 00:18:52.888 } 00:18:52.888 ] 00:18:52.888 }' 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.888 11:28:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.153 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:53.153 11:28:36 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:53.153 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:53.153 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.153 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.153 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.153 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.153 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:53.153 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:53.153 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:53.153 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.153 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.153 [2024-11-20 11:28:36.251862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:53.153 [2024-11-20 11:28:36.251984] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:53.413 [2024-11-20 11:28:36.350769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:53.413 [2024-11-20 11:28:36.350824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:53.413 [2024-11-20 11:28:36.350836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:53.413 11:28:36 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.413 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:53.413 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:53.413 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.413 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:53.413 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.413 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.413 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.413 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:53.413 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:53.413 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:53.413 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86149 00:18:53.413 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86149 ']' 00:18:53.413 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86149 00:18:53.413 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:53.413 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.413 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86149 00:18:53.413 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:53.413 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:53.413 killing process with pid 86149 00:18:53.413 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86149' 00:18:53.413 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86149 00:18:53.413 [2024-11-20 11:28:36.444270] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:53.413 11:28:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86149 00:18:53.413 [2024-11-20 11:28:36.464671] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:54.792 11:28:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:18:54.792 00:18:54.792 real 0m5.246s 00:18:54.792 user 0m7.562s 00:18:54.792 sys 0m0.916s 00:18:54.792 11:28:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.792 11:28:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:54.792 ************************************ 00:18:54.792 END TEST raid_state_function_test_sb_4k 00:18:54.792 ************************************ 00:18:54.792 11:28:37 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:18:54.792 11:28:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:54.792 11:28:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.792 11:28:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:54.792 ************************************ 00:18:54.792 START TEST raid_superblock_test_4k 00:18:54.792 ************************************ 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86403 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 86403 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86403 ']' 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.792 11:28:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:54.792 [2024-11-20 11:28:37.762585] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:18:54.792 [2024-11-20 11:28:37.762727] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86403 ] 00:18:55.051 [2024-11-20 11:28:37.917728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.051 [2024-11-20 11:28:38.039607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.310 [2024-11-20 11:28:38.254873] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:55.310 [2024-11-20 11:28:38.254926] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:18:55.570 11:28:38 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.570 malloc1 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.570 [2024-11-20 11:28:38.656202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:55.570 [2024-11-20 11:28:38.656271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.570 
[2024-11-20 11:28:38.656296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:55.570 [2024-11-20 11:28:38.656306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.570 [2024-11-20 11:28:38.658415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.570 [2024-11-20 11:28:38.658466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:55.570 pt1 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.570 11:28:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.831 malloc2 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.831 [2024-11-20 11:28:38.717886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:55.831 [2024-11-20 11:28:38.717952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.831 [2024-11-20 11:28:38.717977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:55.831 [2024-11-20 11:28:38.717987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.831 [2024-11-20 11:28:38.720522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.831 [2024-11-20 11:28:38.720563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:55.831 pt2 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.831 [2024-11-20 11:28:38.729929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:55.831 [2024-11-20 11:28:38.731941] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:55.831 [2024-11-20 11:28:38.732138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:55.831 [2024-11-20 11:28:38.732164] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:55.831 [2024-11-20 11:28:38.732434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:55.831 [2024-11-20 11:28:38.732636] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:55.831 [2024-11-20 11:28:38.732660] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:55.831 [2024-11-20 11:28:38.732842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.831 "name": "raid_bdev1", 00:18:55.831 "uuid": "655604aa-d62a-4b81-a00c-b2369baa9f25", 00:18:55.831 "strip_size_kb": 0, 00:18:55.831 "state": "online", 00:18:55.831 "raid_level": "raid1", 00:18:55.831 "superblock": true, 00:18:55.831 "num_base_bdevs": 2, 00:18:55.831 "num_base_bdevs_discovered": 2, 00:18:55.831 "num_base_bdevs_operational": 2, 00:18:55.831 "base_bdevs_list": [ 00:18:55.831 { 00:18:55.831 "name": "pt1", 00:18:55.831 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:55.831 "is_configured": true, 00:18:55.831 "data_offset": 256, 00:18:55.831 "data_size": 7936 00:18:55.831 }, 00:18:55.831 { 00:18:55.831 "name": "pt2", 00:18:55.831 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:55.831 "is_configured": true, 00:18:55.831 "data_offset": 256, 00:18:55.831 "data_size": 7936 00:18:55.831 } 00:18:55.831 ] 00:18:55.831 }' 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.831 11:28:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.091 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:56.091 11:28:39 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:56.091 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:56.091 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:56.091 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:56.091 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:56.091 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:56.091 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.091 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:56.091 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.091 [2024-11-20 11:28:39.201436] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:56.349 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.349 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:56.349 "name": "raid_bdev1", 00:18:56.349 "aliases": [ 00:18:56.349 "655604aa-d62a-4b81-a00c-b2369baa9f25" 00:18:56.349 ], 00:18:56.349 "product_name": "Raid Volume", 00:18:56.349 "block_size": 4096, 00:18:56.349 "num_blocks": 7936, 00:18:56.349 "uuid": "655604aa-d62a-4b81-a00c-b2369baa9f25", 00:18:56.349 "assigned_rate_limits": { 00:18:56.349 "rw_ios_per_sec": 0, 00:18:56.349 "rw_mbytes_per_sec": 0, 00:18:56.349 "r_mbytes_per_sec": 0, 00:18:56.349 "w_mbytes_per_sec": 0 00:18:56.349 }, 00:18:56.349 "claimed": false, 00:18:56.349 "zoned": false, 00:18:56.349 "supported_io_types": { 00:18:56.349 "read": true, 00:18:56.349 "write": true, 00:18:56.349 "unmap": false, 00:18:56.349 "flush": false, 
00:18:56.349 "reset": true, 00:18:56.349 "nvme_admin": false, 00:18:56.349 "nvme_io": false, 00:18:56.349 "nvme_io_md": false, 00:18:56.349 "write_zeroes": true, 00:18:56.349 "zcopy": false, 00:18:56.349 "get_zone_info": false, 00:18:56.349 "zone_management": false, 00:18:56.349 "zone_append": false, 00:18:56.349 "compare": false, 00:18:56.349 "compare_and_write": false, 00:18:56.349 "abort": false, 00:18:56.349 "seek_hole": false, 00:18:56.349 "seek_data": false, 00:18:56.349 "copy": false, 00:18:56.349 "nvme_iov_md": false 00:18:56.349 }, 00:18:56.349 "memory_domains": [ 00:18:56.349 { 00:18:56.349 "dma_device_id": "system", 00:18:56.349 "dma_device_type": 1 00:18:56.349 }, 00:18:56.349 { 00:18:56.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.349 "dma_device_type": 2 00:18:56.349 }, 00:18:56.349 { 00:18:56.349 "dma_device_id": "system", 00:18:56.349 "dma_device_type": 1 00:18:56.349 }, 00:18:56.349 { 00:18:56.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.349 "dma_device_type": 2 00:18:56.349 } 00:18:56.349 ], 00:18:56.349 "driver_specific": { 00:18:56.349 "raid": { 00:18:56.349 "uuid": "655604aa-d62a-4b81-a00c-b2369baa9f25", 00:18:56.349 "strip_size_kb": 0, 00:18:56.349 "state": "online", 00:18:56.349 "raid_level": "raid1", 00:18:56.349 "superblock": true, 00:18:56.349 "num_base_bdevs": 2, 00:18:56.349 "num_base_bdevs_discovered": 2, 00:18:56.349 "num_base_bdevs_operational": 2, 00:18:56.349 "base_bdevs_list": [ 00:18:56.349 { 00:18:56.349 "name": "pt1", 00:18:56.349 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:56.349 "is_configured": true, 00:18:56.349 "data_offset": 256, 00:18:56.349 "data_size": 7936 00:18:56.349 }, 00:18:56.349 { 00:18:56.349 "name": "pt2", 00:18:56.349 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:56.349 "is_configured": true, 00:18:56.349 "data_offset": 256, 00:18:56.349 "data_size": 7936 00:18:56.349 } 00:18:56.349 ] 00:18:56.349 } 00:18:56.349 } 00:18:56.349 }' 00:18:56.349 11:28:39 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:56.349 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:56.349 pt2' 00:18:56.349 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:56.349 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:56.349 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:56.349 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:56.349 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:56.349 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.349 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.349 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.349 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:56.349 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:56.349 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:56.350 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:56.350 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.350 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.350 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:56.350 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.350 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:56.350 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:56.350 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:56.350 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:56.350 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.350 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.350 [2024-11-20 11:28:39.456999] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:56.607 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.607 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=655604aa-d62a-4b81-a00c-b2369baa9f25 00:18:56.607 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 655604aa-d62a-4b81-a00c-b2369baa9f25 ']' 00:18:56.607 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:56.607 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.607 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.607 [2024-11-20 11:28:39.504596] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:56.607 [2024-11-20 11:28:39.504628] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:56.607 [2024-11-20 11:28:39.504720] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:56.607 [2024-11-20 11:28:39.504800] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:56.607 [2024-11-20 11:28:39.504818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:56.607 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.608 [2024-11-20 11:28:39.616417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:56.608 [2024-11-20 11:28:39.618349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:56.608 [2024-11-20 11:28:39.618429] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:56.608 [2024-11-20 11:28:39.618517] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:56.608 [2024-11-20 11:28:39.618539] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:56.608 [2024-11-20 11:28:39.618550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:56.608 request: 00:18:56.608 { 00:18:56.608 "name": "raid_bdev1", 00:18:56.608 "raid_level": "raid1", 00:18:56.608 "base_bdevs": [ 00:18:56.608 "malloc1", 00:18:56.608 "malloc2" 00:18:56.608 ], 00:18:56.608 "superblock": false, 00:18:56.608 "method": "bdev_raid_create", 00:18:56.608 "req_id": 1 00:18:56.608 } 00:18:56.608 Got JSON-RPC error response 00:18:56.608 response: 00:18:56.608 { 00:18:56.608 "code": -17, 00:18:56.608 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:56.608 } 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 
128 )) 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.608 [2024-11-20 11:28:39.680305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:56.608 [2024-11-20 11:28:39.680374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.608 [2024-11-20 11:28:39.680392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:56.608 [2024-11-20 11:28:39.680404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.608 [2024-11-20 11:28:39.682824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.608 [2024-11-20 11:28:39.682868] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:56.608 [2024-11-20 11:28:39.682962] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:56.608 [2024-11-20 11:28:39.683052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:56.608 pt1 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.608 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.609 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.609 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.609 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.609 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.609 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.609 11:28:39 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:56.609 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.867 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.867 "name": "raid_bdev1", 00:18:56.867 "uuid": "655604aa-d62a-4b81-a00c-b2369baa9f25", 00:18:56.867 "strip_size_kb": 0, 00:18:56.867 "state": "configuring", 00:18:56.867 "raid_level": "raid1", 00:18:56.867 "superblock": true, 00:18:56.867 "num_base_bdevs": 2, 00:18:56.867 "num_base_bdevs_discovered": 1, 00:18:56.867 "num_base_bdevs_operational": 2, 00:18:56.867 "base_bdevs_list": [ 00:18:56.867 { 00:18:56.867 "name": "pt1", 00:18:56.867 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:56.867 "is_configured": true, 00:18:56.867 "data_offset": 256, 00:18:56.867 "data_size": 7936 00:18:56.867 }, 00:18:56.867 { 00:18:56.867 "name": null, 00:18:56.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:56.867 "is_configured": false, 00:18:56.867 "data_offset": 256, 00:18:56.867 "data_size": 7936 00:18:56.867 } 00:18:56.867 ] 00:18:56.867 }' 00:18:56.867 11:28:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.867 11:28:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:18:57.127 [2024-11-20 11:28:40.215709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:57.127 [2024-11-20 11:28:40.215826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.127 [2024-11-20 11:28:40.215852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:57.127 [2024-11-20 11:28:40.215865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.127 [2024-11-20 11:28:40.216412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.127 [2024-11-20 11:28:40.216447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:57.127 [2024-11-20 11:28:40.216574] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:57.127 [2024-11-20 11:28:40.216618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:57.127 [2024-11-20 11:28:40.216767] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:57.127 [2024-11-20 11:28:40.216789] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:57.127 [2024-11-20 11:28:40.217078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:57.127 [2024-11-20 11:28:40.217274] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:57.127 [2024-11-20 11:28:40.217295] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:57.127 [2024-11-20 11:28:40.217498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.127 pt2 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:57.127 11:28:40 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.127 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.386 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.386 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.386 "name": "raid_bdev1", 00:18:57.386 "uuid": "655604aa-d62a-4b81-a00c-b2369baa9f25", 00:18:57.386 
"strip_size_kb": 0, 00:18:57.386 "state": "online", 00:18:57.386 "raid_level": "raid1", 00:18:57.386 "superblock": true, 00:18:57.386 "num_base_bdevs": 2, 00:18:57.386 "num_base_bdevs_discovered": 2, 00:18:57.386 "num_base_bdevs_operational": 2, 00:18:57.386 "base_bdevs_list": [ 00:18:57.386 { 00:18:57.386 "name": "pt1", 00:18:57.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:57.386 "is_configured": true, 00:18:57.386 "data_offset": 256, 00:18:57.386 "data_size": 7936 00:18:57.386 }, 00:18:57.386 { 00:18:57.386 "name": "pt2", 00:18:57.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:57.386 "is_configured": true, 00:18:57.386 "data_offset": 256, 00:18:57.386 "data_size": 7936 00:18:57.386 } 00:18:57.386 ] 00:18:57.386 }' 00:18:57.386 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.386 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.645 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:57.645 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:57.645 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:57.645 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:57.645 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:57.645 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:57.645 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:57.645 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:57.645 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.645 11:28:40 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.645 [2024-11-20 11:28:40.711328] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:57.645 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.645 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:57.645 "name": "raid_bdev1", 00:18:57.645 "aliases": [ 00:18:57.645 "655604aa-d62a-4b81-a00c-b2369baa9f25" 00:18:57.645 ], 00:18:57.645 "product_name": "Raid Volume", 00:18:57.645 "block_size": 4096, 00:18:57.645 "num_blocks": 7936, 00:18:57.645 "uuid": "655604aa-d62a-4b81-a00c-b2369baa9f25", 00:18:57.645 "assigned_rate_limits": { 00:18:57.645 "rw_ios_per_sec": 0, 00:18:57.645 "rw_mbytes_per_sec": 0, 00:18:57.645 "r_mbytes_per_sec": 0, 00:18:57.645 "w_mbytes_per_sec": 0 00:18:57.645 }, 00:18:57.645 "claimed": false, 00:18:57.645 "zoned": false, 00:18:57.645 "supported_io_types": { 00:18:57.645 "read": true, 00:18:57.645 "write": true, 00:18:57.645 "unmap": false, 00:18:57.645 "flush": false, 00:18:57.645 "reset": true, 00:18:57.645 "nvme_admin": false, 00:18:57.645 "nvme_io": false, 00:18:57.645 "nvme_io_md": false, 00:18:57.645 "write_zeroes": true, 00:18:57.645 "zcopy": false, 00:18:57.645 "get_zone_info": false, 00:18:57.645 "zone_management": false, 00:18:57.645 "zone_append": false, 00:18:57.645 "compare": false, 00:18:57.645 "compare_and_write": false, 00:18:57.645 "abort": false, 00:18:57.645 "seek_hole": false, 00:18:57.645 "seek_data": false, 00:18:57.645 "copy": false, 00:18:57.645 "nvme_iov_md": false 00:18:57.645 }, 00:18:57.645 "memory_domains": [ 00:18:57.645 { 00:18:57.645 "dma_device_id": "system", 00:18:57.646 "dma_device_type": 1 00:18:57.646 }, 00:18:57.646 { 00:18:57.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.646 "dma_device_type": 2 00:18:57.646 }, 00:18:57.646 { 00:18:57.646 "dma_device_id": "system", 00:18:57.646 
"dma_device_type": 1 00:18:57.646 }, 00:18:57.646 { 00:18:57.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.646 "dma_device_type": 2 00:18:57.646 } 00:18:57.646 ], 00:18:57.646 "driver_specific": { 00:18:57.646 "raid": { 00:18:57.646 "uuid": "655604aa-d62a-4b81-a00c-b2369baa9f25", 00:18:57.646 "strip_size_kb": 0, 00:18:57.646 "state": "online", 00:18:57.646 "raid_level": "raid1", 00:18:57.646 "superblock": true, 00:18:57.646 "num_base_bdevs": 2, 00:18:57.646 "num_base_bdevs_discovered": 2, 00:18:57.646 "num_base_bdevs_operational": 2, 00:18:57.646 "base_bdevs_list": [ 00:18:57.646 { 00:18:57.646 "name": "pt1", 00:18:57.646 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:57.646 "is_configured": true, 00:18:57.646 "data_offset": 256, 00:18:57.646 "data_size": 7936 00:18:57.646 }, 00:18:57.646 { 00:18:57.646 "name": "pt2", 00:18:57.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:57.646 "is_configured": true, 00:18:57.646 "data_offset": 256, 00:18:57.646 "data_size": 7936 00:18:57.646 } 00:18:57.646 ] 00:18:57.646 } 00:18:57.646 } 00:18:57.646 }' 00:18:57.646 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:57.905 pt2' 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.905 [2024-11-20 11:28:40.919009] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 655604aa-d62a-4b81-a00c-b2369baa9f25 '!=' 655604aa-d62a-4b81-a00c-b2369baa9f25 ']' 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.905 [2024-11-20 11:28:40.966694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.905 11:28:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.164 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.164 "name": "raid_bdev1", 00:18:58.164 "uuid": "655604aa-d62a-4b81-a00c-b2369baa9f25", 00:18:58.164 "strip_size_kb": 0, 00:18:58.164 "state": "online", 00:18:58.164 "raid_level": "raid1", 00:18:58.164 "superblock": true, 00:18:58.164 "num_base_bdevs": 2, 00:18:58.164 "num_base_bdevs_discovered": 1, 00:18:58.164 "num_base_bdevs_operational": 1, 00:18:58.164 "base_bdevs_list": [ 00:18:58.164 { 00:18:58.164 "name": null, 00:18:58.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.164 "is_configured": false, 00:18:58.164 "data_offset": 0, 00:18:58.164 "data_size": 7936 00:18:58.164 }, 00:18:58.164 { 00:18:58.164 "name": "pt2", 00:18:58.164 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:58.164 "is_configured": true, 00:18:58.164 "data_offset": 256, 00:18:58.164 "data_size": 7936 00:18:58.164 } 00:18:58.164 ] 00:18:58.164 }' 00:18:58.164 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.164 11:28:41 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.424 [2024-11-20 11:28:41.413873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:58.424 [2024-11-20 11:28:41.413906] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:58.424 [2024-11-20 11:28:41.413989] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:58.424 [2024-11-20 11:28:41.414041] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:58.424 [2024-11-20 11:28:41.414056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.424 [2024-11-20 11:28:41.481726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:58.424 [2024-11-20 11:28:41.481791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.424 [2024-11-20 11:28:41.481810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:58.424 [2024-11-20 11:28:41.481821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.424 [2024-11-20 11:28:41.484063] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.424 [2024-11-20 11:28:41.484107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:58.424 [2024-11-20 11:28:41.484197] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:58.424 [2024-11-20 11:28:41.484258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:58.424 [2024-11-20 11:28:41.484374] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:58.424 [2024-11-20 11:28:41.484392] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:58.424 [2024-11-20 11:28:41.484651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:58.424 [2024-11-20 11:28:41.484838] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:58.424 [2024-11-20 11:28:41.484861] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:58.424 [2024-11-20 11:28:41.485012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.424 pt2 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.424 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.425 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.425 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.425 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.425 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.425 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.425 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.425 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.425 "name": "raid_bdev1", 00:18:58.425 "uuid": "655604aa-d62a-4b81-a00c-b2369baa9f25", 00:18:58.425 "strip_size_kb": 0, 00:18:58.425 "state": "online", 00:18:58.425 "raid_level": "raid1", 00:18:58.425 "superblock": true, 00:18:58.425 "num_base_bdevs": 2, 00:18:58.425 "num_base_bdevs_discovered": 1, 00:18:58.425 "num_base_bdevs_operational": 1, 00:18:58.425 "base_bdevs_list": [ 00:18:58.425 { 00:18:58.425 "name": null, 00:18:58.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.425 "is_configured": false, 00:18:58.425 "data_offset": 256, 00:18:58.425 "data_size": 7936 00:18:58.425 }, 00:18:58.425 { 00:18:58.425 "name": "pt2", 00:18:58.425 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:58.425 "is_configured": true, 00:18:58.425 "data_offset": 256, 00:18:58.425 "data_size": 7936 00:18:58.425 } 00:18:58.425 ] 00:18:58.425 }' 
00:18:58.425 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.425 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.993 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:58.993 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.993 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.993 [2024-11-20 11:28:41.920953] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:58.994 [2024-11-20 11:28:41.920987] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:58.994 [2024-11-20 11:28:41.921060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:58.994 [2024-11-20 11:28:41.921111] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:58.994 [2024-11-20 11:28:41.921120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.994 [2024-11-20 11:28:41.984876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:58.994 [2024-11-20 11:28:41.984945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.994 [2024-11-20 11:28:41.984970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:58.994 [2024-11-20 11:28:41.984980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.994 [2024-11-20 11:28:41.987360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.994 [2024-11-20 11:28:41.987398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:58.994 [2024-11-20 11:28:41.987532] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:58.994 [2024-11-20 11:28:41.987584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:58.994 [2024-11-20 11:28:41.987744] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:58.994 [2024-11-20 11:28:41.987764] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:58.994 [2024-11-20 11:28:41.987796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:58.994 [2024-11-20 11:28:41.987889] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:58.994 [2024-11-20 11:28:41.987981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:58.994 [2024-11-20 11:28:41.987991] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:58.994 [2024-11-20 11:28:41.988255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:58.994 [2024-11-20 11:28:41.988432] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:58.994 [2024-11-20 11:28:41.988461] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:58.994 [2024-11-20 11:28:41.988636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.994 pt1 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.994 11:28:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.994 11:28:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.994 11:28:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.994 "name": "raid_bdev1", 00:18:58.994 "uuid": "655604aa-d62a-4b81-a00c-b2369baa9f25", 00:18:58.994 "strip_size_kb": 0, 00:18:58.994 "state": "online", 00:18:58.994 "raid_level": "raid1", 00:18:58.994 "superblock": true, 00:18:58.994 "num_base_bdevs": 2, 00:18:58.994 "num_base_bdevs_discovered": 1, 00:18:58.994 "num_base_bdevs_operational": 1, 00:18:58.994 "base_bdevs_list": [ 00:18:58.994 { 00:18:58.994 "name": null, 00:18:58.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.994 "is_configured": false, 00:18:58.994 "data_offset": 256, 00:18:58.994 "data_size": 7936 00:18:58.994 }, 00:18:58.994 { 00:18:58.994 "name": "pt2", 00:18:58.994 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:58.994 "is_configured": true, 00:18:58.994 "data_offset": 256, 00:18:58.994 "data_size": 7936 00:18:58.994 } 00:18:58.994 ] 00:18:58.994 }' 00:18:58.994 11:28:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.994 11:28:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.561 11:28:42 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:59.561 11:28:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.561 11:28:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.561 11:28:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:59.561 11:28:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.561 11:28:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:59.561 11:28:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:59.561 11:28:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:59.561 11:28:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.561 11:28:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.561 [2024-11-20 11:28:42.516249] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.561 11:28:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.561 11:28:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 655604aa-d62a-4b81-a00c-b2369baa9f25 '!=' 655604aa-d62a-4b81-a00c-b2369baa9f25 ']' 00:18:59.561 11:28:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86403 00:18:59.561 11:28:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86403 ']' 00:18:59.562 11:28:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86403 00:18:59.562 11:28:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:18:59.562 11:28:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:18:59.562 11:28:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86403 00:18:59.562 11:28:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:59.562 11:28:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:59.562 killing process with pid 86403 00:18:59.562 11:28:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86403' 00:18:59.562 11:28:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86403 00:18:59.562 [2024-11-20 11:28:42.605283] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:59.562 [2024-11-20 11:28:42.605395] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:59.562 [2024-11-20 11:28:42.605462] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:59.562 [2024-11-20 11:28:42.605479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:59.562 11:28:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86403 00:18:59.822 [2024-11-20 11:28:42.835825] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:01.205 11:28:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:19:01.205 00:19:01.205 real 0m6.403s 00:19:01.205 user 0m9.646s 00:19:01.205 sys 0m1.135s 00:19:01.205 11:28:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.205 11:28:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.205 ************************************ 00:19:01.205 END TEST raid_superblock_test_4k 00:19:01.205 ************************************ 00:19:01.205 11:28:44 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:19:01.205 11:28:44 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:19:01.205 11:28:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:01.205 11:28:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:01.205 11:28:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:01.205 ************************************ 00:19:01.205 START TEST raid_rebuild_test_sb_4k 00:19:01.205 ************************************ 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86732 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86732 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86732 ']' 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.205 11:28:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:01.205 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:01.205 Zero copy mechanism will not be used. 00:19:01.205 [2024-11-20 11:28:44.298742] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:19:01.205 [2024-11-20 11:28:44.298880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86732 ] 00:19:01.465 [2024-11-20 11:28:44.478520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.724 [2024-11-20 11:28:44.600194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.724 [2024-11-20 11:28:44.823923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:01.724 [2024-11-20 11:28:44.823995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.292 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.292 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:19:02.292 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:02.292 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:19:02.292 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.292 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.292 BaseBdev1_malloc 00:19:02.292 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.292 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:02.292 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.292 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.292 [2024-11-20 11:28:45.209195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:02.292 [2024-11-20 11:28:45.209271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.293 [2024-11-20 11:28:45.209297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:02.293 [2024-11-20 11:28:45.209308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.293 [2024-11-20 11:28:45.211404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.293 [2024-11-20 11:28:45.211443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:02.293 BaseBdev1 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.293 BaseBdev2_malloc 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.293 [2024-11-20 11:28:45.267363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:02.293 [2024-11-20 11:28:45.267436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.293 [2024-11-20 11:28:45.267479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:02.293 [2024-11-20 11:28:45.267511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.293 [2024-11-20 11:28:45.269721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.293 [2024-11-20 11:28:45.269762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:02.293 BaseBdev2 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.293 spare_malloc 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.293 spare_delay 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.293 [2024-11-20 11:28:45.351758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:02.293 [2024-11-20 11:28:45.351839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.293 [2024-11-20 11:28:45.351866] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:02.293 [2024-11-20 11:28:45.351879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.293 [2024-11-20 11:28:45.354104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.293 [2024-11-20 11:28:45.354149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:02.293 spare 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.293 [2024-11-20 11:28:45.363790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:02.293 [2024-11-20 11:28:45.365745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:02.293 [2024-11-20 11:28:45.365930] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:02.293 [2024-11-20 11:28:45.365960] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:02.293 [2024-11-20 11:28:45.366220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:02.293 [2024-11-20 11:28:45.366399] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:02.293 [2024-11-20 11:28:45.366414] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:02.293 [2024-11-20 11:28:45.366585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.293 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.553 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.553 "name": "raid_bdev1", 00:19:02.553 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:02.553 "strip_size_kb": 0, 00:19:02.553 "state": "online", 00:19:02.553 "raid_level": "raid1", 00:19:02.553 "superblock": true, 00:19:02.553 "num_base_bdevs": 2, 00:19:02.553 "num_base_bdevs_discovered": 2, 00:19:02.553 "num_base_bdevs_operational": 2, 00:19:02.553 "base_bdevs_list": [ 00:19:02.553 { 00:19:02.553 "name": "BaseBdev1", 00:19:02.553 "uuid": "79531398-e147-5d2f-ad8e-32b99b6e26d5", 00:19:02.553 "is_configured": true, 00:19:02.553 "data_offset": 256, 00:19:02.553 "data_size": 7936 00:19:02.553 }, 00:19:02.553 { 00:19:02.553 "name": "BaseBdev2", 00:19:02.553 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:02.553 "is_configured": true, 00:19:02.553 "data_offset": 256, 00:19:02.553 "data_size": 7936 00:19:02.553 } 00:19:02.553 ] 00:19:02.553 }' 00:19:02.553 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:19:02.553 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.811 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:02.811 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:02.811 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.811 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.811 [2024-11-20 11:28:45.843602] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:02.811 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.812 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:02.812 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.812 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.812 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.812 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:02.812 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.072 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:03.072 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:03.072 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:03.072 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:03.072 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock 
raid_bdev1 /dev/nbd0 00:19:03.072 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:03.072 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:03.072 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:03.072 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:03.072 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:03.072 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:03.072 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:03.072 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:03.072 11:28:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:03.072 [2024-11-20 11:28:46.126783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:03.072 /dev/nbd0 00:19:03.072 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:03.072 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:03.072 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:03.072 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:03.072 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:03.072 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:03.072 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:03.072 11:28:46 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:03.072 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:03.072 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:03.072 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:03.072 1+0 records in 00:19:03.072 1+0 records out 00:19:03.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416004 s, 9.8 MB/s 00:19:03.072 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.072 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:03.072 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.332 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:03.332 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:03.332 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:03.332 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:03.332 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:03.332 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:03.332 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:03.904 7936+0 records in 00:19:03.904 7936+0 records out 00:19:03.904 32505856 bytes (33 MB, 31 MiB) copied, 0.757921 s, 42.9 MB/s 00:19:03.904 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:03.904 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:03.904 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:03.904 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:03.904 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:03.904 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:03.904 11:28:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:04.164 [2024-11-20 11:28:47.187823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.164 11:28:47 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.164 [2024-11-20 11:28:47.203884] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.164 "name": "raid_bdev1", 00:19:04.164 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:04.164 "strip_size_kb": 0, 00:19:04.164 "state": "online", 00:19:04.164 "raid_level": "raid1", 00:19:04.164 "superblock": true, 00:19:04.164 "num_base_bdevs": 2, 00:19:04.164 "num_base_bdevs_discovered": 1, 00:19:04.164 "num_base_bdevs_operational": 1, 00:19:04.164 "base_bdevs_list": [ 00:19:04.164 { 00:19:04.164 "name": null, 00:19:04.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.164 "is_configured": false, 00:19:04.164 "data_offset": 0, 00:19:04.164 "data_size": 7936 00:19:04.164 }, 00:19:04.164 { 00:19:04.164 "name": "BaseBdev2", 00:19:04.164 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:04.164 "is_configured": true, 00:19:04.164 "data_offset": 256, 00:19:04.164 "data_size": 7936 00:19:04.164 } 00:19:04.164 ] 00:19:04.164 }' 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.164 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.734 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:04.734 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.734 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.734 [2024-11-20 11:28:47.631272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:04.734 [2024-11-20 11:28:47.649679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:04.734 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.734 11:28:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:04.734 [2024-11-20 
11:28:47.651506] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:05.673 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.674 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.674 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:05.674 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:05.674 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.674 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.674 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.674 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.674 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.674 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.674 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.674 "name": "raid_bdev1", 00:19:05.674 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:05.674 "strip_size_kb": 0, 00:19:05.674 "state": "online", 00:19:05.674 "raid_level": "raid1", 00:19:05.674 "superblock": true, 00:19:05.674 "num_base_bdevs": 2, 00:19:05.674 "num_base_bdevs_discovered": 2, 00:19:05.674 "num_base_bdevs_operational": 2, 00:19:05.674 "process": { 00:19:05.674 "type": "rebuild", 00:19:05.674 "target": "spare", 00:19:05.674 "progress": { 00:19:05.674 "blocks": 2560, 00:19:05.674 "percent": 32 00:19:05.674 } 00:19:05.674 }, 00:19:05.674 "base_bdevs_list": [ 00:19:05.674 { 00:19:05.674 "name": "spare", 
00:19:05.674 "uuid": "cc1db9e8-4438-5130-9094-e4c887096ff0", 00:19:05.674 "is_configured": true, 00:19:05.674 "data_offset": 256, 00:19:05.674 "data_size": 7936 00:19:05.674 }, 00:19:05.674 { 00:19:05.674 "name": "BaseBdev2", 00:19:05.674 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:05.674 "is_configured": true, 00:19:05.674 "data_offset": 256, 00:19:05.674 "data_size": 7936 00:19:05.674 } 00:19:05.674 ] 00:19:05.674 }' 00:19:05.674 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.674 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.674 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.934 [2024-11-20 11:28:48.794827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:05.934 [2024-11-20 11:28:48.857195] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:05.934 [2024-11-20 11:28:48.857283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.934 [2024-11-20 11:28:48.857299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:05.934 [2024-11-20 11:28:48.857308] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.934 11:28:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.934 "name": "raid_bdev1", 00:19:05.934 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:05.934 "strip_size_kb": 0, 00:19:05.934 "state": "online", 00:19:05.934 "raid_level": "raid1", 00:19:05.934 
"superblock": true, 00:19:05.934 "num_base_bdevs": 2, 00:19:05.934 "num_base_bdevs_discovered": 1, 00:19:05.934 "num_base_bdevs_operational": 1, 00:19:05.934 "base_bdevs_list": [ 00:19:05.934 { 00:19:05.934 "name": null, 00:19:05.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.934 "is_configured": false, 00:19:05.934 "data_offset": 0, 00:19:05.934 "data_size": 7936 00:19:05.934 }, 00:19:05.934 { 00:19:05.934 "name": "BaseBdev2", 00:19:05.934 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:05.934 "is_configured": true, 00:19:05.934 "data_offset": 256, 00:19:05.934 "data_size": 7936 00:19:05.934 } 00:19:05.934 ] 00:19:05.934 }' 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.934 11:28:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.505 11:28:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:06.505 11:28:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:06.505 11:28:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:06.505 11:28:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:06.505 11:28:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:06.505 11:28:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.505 11:28:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.505 11:28:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.505 11:28:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.505 11:28:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:06.505 11:28:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:06.505 "name": "raid_bdev1", 00:19:06.505 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:06.505 "strip_size_kb": 0, 00:19:06.505 "state": "online", 00:19:06.505 "raid_level": "raid1", 00:19:06.505 "superblock": true, 00:19:06.505 "num_base_bdevs": 2, 00:19:06.505 "num_base_bdevs_discovered": 1, 00:19:06.505 "num_base_bdevs_operational": 1, 00:19:06.505 "base_bdevs_list": [ 00:19:06.505 { 00:19:06.505 "name": null, 00:19:06.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.505 "is_configured": false, 00:19:06.505 "data_offset": 0, 00:19:06.505 "data_size": 7936 00:19:06.505 }, 00:19:06.505 { 00:19:06.505 "name": "BaseBdev2", 00:19:06.505 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:06.505 "is_configured": true, 00:19:06.505 "data_offset": 256, 00:19:06.505 "data_size": 7936 00:19:06.505 } 00:19:06.505 ] 00:19:06.505 }' 00:19:06.505 11:28:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:06.505 11:28:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:06.505 11:28:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:06.505 11:28:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:06.505 11:28:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:06.505 11:28:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.505 11:28:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.505 [2024-11-20 11:28:49.468066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:06.505 [2024-11-20 11:28:49.484961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d00018d330 00:19:06.505 11:28:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.505 11:28:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:06.505 [2024-11-20 11:28:49.486842] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:07.494 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:07.494 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.494 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:07.494 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:07.494 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.494 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.494 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.494 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.494 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.494 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.494 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.494 "name": "raid_bdev1", 00:19:07.494 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:07.494 "strip_size_kb": 0, 00:19:07.494 "state": "online", 00:19:07.494 "raid_level": "raid1", 00:19:07.494 "superblock": true, 00:19:07.494 "num_base_bdevs": 2, 00:19:07.494 "num_base_bdevs_discovered": 2, 00:19:07.494 "num_base_bdevs_operational": 2, 00:19:07.494 "process": { 00:19:07.494 
"type": "rebuild", 00:19:07.494 "target": "spare", 00:19:07.494 "progress": { 00:19:07.494 "blocks": 2560, 00:19:07.494 "percent": 32 00:19:07.494 } 00:19:07.494 }, 00:19:07.494 "base_bdevs_list": [ 00:19:07.494 { 00:19:07.494 "name": "spare", 00:19:07.494 "uuid": "cc1db9e8-4438-5130-9094-e4c887096ff0", 00:19:07.494 "is_configured": true, 00:19:07.494 "data_offset": 256, 00:19:07.494 "data_size": 7936 00:19:07.494 }, 00:19:07.494 { 00:19:07.494 "name": "BaseBdev2", 00:19:07.494 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:07.494 "is_configured": true, 00:19:07.494 "data_offset": 256, 00:19:07.494 "data_size": 7936 00:19:07.494 } 00:19:07.494 ] 00:19:07.494 }' 00:19:07.494 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.494 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:07.494 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.753 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:07.753 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:07.753 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:07.753 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:07.753 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:07.753 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:07.753 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:07.753 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=696 00:19:07.753 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:19:07.753 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:07.753 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.753 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:07.753 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:07.753 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.753 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.753 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.753 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.753 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.753 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.753 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.753 "name": "raid_bdev1", 00:19:07.753 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:07.753 "strip_size_kb": 0, 00:19:07.753 "state": "online", 00:19:07.753 "raid_level": "raid1", 00:19:07.753 "superblock": true, 00:19:07.753 "num_base_bdevs": 2, 00:19:07.753 "num_base_bdevs_discovered": 2, 00:19:07.753 "num_base_bdevs_operational": 2, 00:19:07.753 "process": { 00:19:07.753 "type": "rebuild", 00:19:07.753 "target": "spare", 00:19:07.753 "progress": { 00:19:07.753 "blocks": 2816, 00:19:07.753 "percent": 35 00:19:07.753 } 00:19:07.753 }, 00:19:07.753 "base_bdevs_list": [ 00:19:07.753 { 00:19:07.753 "name": "spare", 00:19:07.753 "uuid": "cc1db9e8-4438-5130-9094-e4c887096ff0", 00:19:07.753 "is_configured": true, 
00:19:07.753 "data_offset": 256, 00:19:07.753 "data_size": 7936 00:19:07.753 }, 00:19:07.753 { 00:19:07.753 "name": "BaseBdev2", 00:19:07.753 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:07.753 "is_configured": true, 00:19:07.753 "data_offset": 256, 00:19:07.753 "data_size": 7936 00:19:07.754 } 00:19:07.754 ] 00:19:07.754 }' 00:19:07.754 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.754 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:07.754 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.754 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:07.754 11:28:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:08.693 11:28:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:08.693 11:28:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:08.693 11:28:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.693 11:28:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:08.693 11:28:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:08.693 11:28:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.693 11:28:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.693 11:28:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.693 11:28:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.953 11:28:51 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@10 -- # set +x 00:19:08.953 11:28:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.953 11:28:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.953 "name": "raid_bdev1", 00:19:08.953 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:08.953 "strip_size_kb": 0, 00:19:08.953 "state": "online", 00:19:08.953 "raid_level": "raid1", 00:19:08.953 "superblock": true, 00:19:08.953 "num_base_bdevs": 2, 00:19:08.953 "num_base_bdevs_discovered": 2, 00:19:08.953 "num_base_bdevs_operational": 2, 00:19:08.953 "process": { 00:19:08.953 "type": "rebuild", 00:19:08.953 "target": "spare", 00:19:08.953 "progress": { 00:19:08.953 "blocks": 5888, 00:19:08.953 "percent": 74 00:19:08.953 } 00:19:08.953 }, 00:19:08.953 "base_bdevs_list": [ 00:19:08.953 { 00:19:08.953 "name": "spare", 00:19:08.953 "uuid": "cc1db9e8-4438-5130-9094-e4c887096ff0", 00:19:08.953 "is_configured": true, 00:19:08.953 "data_offset": 256, 00:19:08.953 "data_size": 7936 00:19:08.953 }, 00:19:08.953 { 00:19:08.953 "name": "BaseBdev2", 00:19:08.953 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:08.953 "is_configured": true, 00:19:08.953 "data_offset": 256, 00:19:08.953 "data_size": 7936 00:19:08.953 } 00:19:08.953 ] 00:19:08.953 }' 00:19:08.953 11:28:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.953 11:28:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:08.953 11:28:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.953 11:28:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:08.953 11:28:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:09.525 [2024-11-20 11:28:52.601580] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:19:09.525 [2024-11-20 11:28:52.601680] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:09.525 [2024-11-20 11:28:52.601820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:10.099 11:28:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:10.099 11:28:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:10.099 11:28:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.099 11:28:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:10.099 11:28:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:10.099 11:28:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.099 11:28:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.099 11:28:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.099 11:28:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.099 11:28:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.099 11:28:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.099 11:28:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.099 "name": "raid_bdev1", 00:19:10.099 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:10.100 "strip_size_kb": 0, 00:19:10.100 "state": "online", 00:19:10.100 "raid_level": "raid1", 00:19:10.100 "superblock": true, 00:19:10.100 "num_base_bdevs": 2, 00:19:10.100 "num_base_bdevs_discovered": 2, 00:19:10.100 "num_base_bdevs_operational": 2, 
00:19:10.100 "base_bdevs_list": [ 00:19:10.100 { 00:19:10.100 "name": "spare", 00:19:10.100 "uuid": "cc1db9e8-4438-5130-9094-e4c887096ff0", 00:19:10.100 "is_configured": true, 00:19:10.100 "data_offset": 256, 00:19:10.100 "data_size": 7936 00:19:10.100 }, 00:19:10.100 { 00:19:10.100 "name": "BaseBdev2", 00:19:10.100 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:10.100 "is_configured": true, 00:19:10.100 "data_offset": 256, 00:19:10.100 "data_size": 7936 00:19:10.100 } 00:19:10.100 ] 00:19:10.100 }' 00:19:10.100 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.100 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:10.100 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.100 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:10.100 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:19:10.100 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:10.100 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.100 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:10.100 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:10.100 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.100 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.100 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.100 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:10.100 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.100 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.100 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.100 "name": "raid_bdev1", 00:19:10.100 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:10.100 "strip_size_kb": 0, 00:19:10.100 "state": "online", 00:19:10.100 "raid_level": "raid1", 00:19:10.100 "superblock": true, 00:19:10.100 "num_base_bdevs": 2, 00:19:10.100 "num_base_bdevs_discovered": 2, 00:19:10.100 "num_base_bdevs_operational": 2, 00:19:10.100 "base_bdevs_list": [ 00:19:10.100 { 00:19:10.100 "name": "spare", 00:19:10.100 "uuid": "cc1db9e8-4438-5130-9094-e4c887096ff0", 00:19:10.100 "is_configured": true, 00:19:10.100 "data_offset": 256, 00:19:10.100 "data_size": 7936 00:19:10.100 }, 00:19:10.100 { 00:19:10.100 "name": "BaseBdev2", 00:19:10.100 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:10.100 "is_configured": true, 00:19:10.100 "data_offset": 256, 00:19:10.100 "data_size": 7936 00:19:10.100 } 00:19:10.100 ] 00:19:10.100 }' 00:19:10.100 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.100 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:10.100 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.368 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:10.368 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:10.368 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:10.368 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:10.368 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:10.368 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:10.368 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:10.368 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.368 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.368 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.368 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.368 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.368 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.368 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.368 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.368 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.368 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.368 "name": "raid_bdev1", 00:19:10.368 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:10.368 "strip_size_kb": 0, 00:19:10.368 "state": "online", 00:19:10.368 "raid_level": "raid1", 00:19:10.368 "superblock": true, 00:19:10.368 "num_base_bdevs": 2, 00:19:10.368 "num_base_bdevs_discovered": 2, 00:19:10.368 "num_base_bdevs_operational": 2, 00:19:10.368 "base_bdevs_list": [ 00:19:10.368 { 00:19:10.368 "name": "spare", 00:19:10.368 "uuid": "cc1db9e8-4438-5130-9094-e4c887096ff0", 00:19:10.368 "is_configured": true, 00:19:10.368 
"data_offset": 256, 00:19:10.368 "data_size": 7936 00:19:10.368 }, 00:19:10.368 { 00:19:10.368 "name": "BaseBdev2", 00:19:10.368 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:10.368 "is_configured": true, 00:19:10.368 "data_offset": 256, 00:19:10.368 "data_size": 7936 00:19:10.368 } 00:19:10.368 ] 00:19:10.368 }' 00:19:10.368 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.368 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.639 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:10.639 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.639 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.639 [2024-11-20 11:28:53.701248] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:10.639 [2024-11-20 11:28:53.701394] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:10.639 [2024-11-20 11:28:53.701519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:10.639 [2024-11-20 11:28:53.701594] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:10.639 [2024-11-20 11:28:53.701607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:10.639 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.639 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.639 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.639 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:19:10.639 11:28:53 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.639 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.912 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:10.912 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:10.912 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:10.912 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:10.912 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:10.912 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:10.912 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:10.912 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:10.912 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:10.912 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:10.912 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:10.912 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:10.912 11:28:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:10.912 /dev/nbd0 00:19:10.912 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:10.912 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:10.912 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:10.912 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:10.912 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:10.912 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:10.912 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:10.912 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:10.912 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:11.186 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:11.186 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:11.186 1+0 records in 00:19:11.186 1+0 records out 00:19:11.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612712 s, 6.7 MB/s 00:19:11.186 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.186 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:11.186 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.186 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:11.186 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:11.187 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:11.187 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:11.187 11:28:54 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:11.187 /dev/nbd1 00:19:11.187 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:11.187 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:11.187 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:11.187 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:11.187 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:11.187 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:11.187 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:11.187 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:11.187 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:11.187 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:11.187 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:11.449 1+0 records in 00:19:11.449 1+0 records out 00:19:11.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046772 s, 8.8 MB/s 00:19:11.449 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.449 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:11.449 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.449 
11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:11.449 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:11.449 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:11.449 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:11.449 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:11.449 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:11.449 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:11.449 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:11.449 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:11.449 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:11.449 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:11.449 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:11.708 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:11.708 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:11.708 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:11.708 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:11.708 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:11.708 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:19:11.708 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:11.708 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:11.708 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:11.708 11:28:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:11.967 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:11.967 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:11.967 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:11.967 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:11.967 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:11.967 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:11.967 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:11.967 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:11.967 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:11.967 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:11.967 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.967 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.967 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.967 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p 
spare 00:19:11.967 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.967 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.967 [2024-11-20 11:28:55.073362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:11.967 [2024-11-20 11:28:55.073524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.967 [2024-11-20 11:28:55.073554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:11.967 [2024-11-20 11:28:55.073563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.967 [2024-11-20 11:28:55.075796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.967 [2024-11-20 11:28:55.075838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:11.967 [2024-11-20 11:28:55.075960] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:11.967 [2024-11-20 11:28:55.076019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:11.967 [2024-11-20 11:28:55.076203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:11.967 spare 00:19:11.967 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.967 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:11.967 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.967 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.227 [2024-11-20 11:28:55.176136] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:12.227 [2024-11-20 11:28:55.176186] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4096 00:19:12.227 [2024-11-20 11:28:55.176588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:12.227 [2024-11-20 11:28:55.176844] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:12.227 [2024-11-20 11:28:55.176862] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:12.227 [2024-11-20 11:28:55.177098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.227 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.227 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:12.227 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.227 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.227 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.227 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.227 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:12.227 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.227 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.227 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.227 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.227 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.227 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.227 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.227 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.227 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.227 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.227 "name": "raid_bdev1", 00:19:12.227 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:12.227 "strip_size_kb": 0, 00:19:12.227 "state": "online", 00:19:12.227 "raid_level": "raid1", 00:19:12.227 "superblock": true, 00:19:12.227 "num_base_bdevs": 2, 00:19:12.227 "num_base_bdevs_discovered": 2, 00:19:12.227 "num_base_bdevs_operational": 2, 00:19:12.227 "base_bdevs_list": [ 00:19:12.227 { 00:19:12.227 "name": "spare", 00:19:12.227 "uuid": "cc1db9e8-4438-5130-9094-e4c887096ff0", 00:19:12.227 "is_configured": true, 00:19:12.227 "data_offset": 256, 00:19:12.227 "data_size": 7936 00:19:12.227 }, 00:19:12.227 { 00:19:12.227 "name": "BaseBdev2", 00:19:12.227 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:12.227 "is_configured": true, 00:19:12.227 "data_offset": 256, 00:19:12.227 "data_size": 7936 00:19:12.227 } 00:19:12.227 ] 00:19:12.227 }' 00:19:12.227 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.227 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local 
target=none 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.798 "name": "raid_bdev1", 00:19:12.798 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:12.798 "strip_size_kb": 0, 00:19:12.798 "state": "online", 00:19:12.798 "raid_level": "raid1", 00:19:12.798 "superblock": true, 00:19:12.798 "num_base_bdevs": 2, 00:19:12.798 "num_base_bdevs_discovered": 2, 00:19:12.798 "num_base_bdevs_operational": 2, 00:19:12.798 "base_bdevs_list": [ 00:19:12.798 { 00:19:12.798 "name": "spare", 00:19:12.798 "uuid": "cc1db9e8-4438-5130-9094-e4c887096ff0", 00:19:12.798 "is_configured": true, 00:19:12.798 "data_offset": 256, 00:19:12.798 "data_size": 7936 00:19:12.798 }, 00:19:12.798 { 00:19:12.798 "name": "BaseBdev2", 00:19:12.798 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:12.798 "is_configured": true, 00:19:12.798 "data_offset": 256, 00:19:12.798 "data_size": 7936 00:19:12.798 } 00:19:12.798 ] 00:19:12.798 }' 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.798 [2024-11-20 11:28:55.856190] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.798 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.058 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.058 "name": "raid_bdev1", 00:19:13.058 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:13.058 "strip_size_kb": 0, 00:19:13.058 "state": "online", 00:19:13.058 "raid_level": "raid1", 00:19:13.058 "superblock": true, 00:19:13.058 "num_base_bdevs": 2, 00:19:13.058 "num_base_bdevs_discovered": 1, 00:19:13.058 "num_base_bdevs_operational": 1, 00:19:13.058 "base_bdevs_list": [ 00:19:13.058 { 00:19:13.058 "name": null, 00:19:13.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.058 "is_configured": false, 00:19:13.058 "data_offset": 0, 00:19:13.058 "data_size": 7936 00:19:13.058 }, 00:19:13.058 { 00:19:13.058 "name": "BaseBdev2", 00:19:13.058 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:13.058 "is_configured": true, 00:19:13.058 "data_offset": 256, 00:19:13.058 "data_size": 7936 00:19:13.058 } 00:19:13.058 ] 00:19:13.058 }' 
00:19:13.058 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.058 11:28:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.317 11:28:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:13.317 11:28:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.317 11:28:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.317 [2024-11-20 11:28:56.359481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:13.317 [2024-11-20 11:28:56.359779] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:13.317 [2024-11-20 11:28:56.359855] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:13.317 [2024-11-20 11:28:56.359930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:13.317 [2024-11-20 11:28:56.376836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:13.317 11:28:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.317 11:28:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:13.317 [2024-11-20 11:28:56.378862] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.697 "name": "raid_bdev1", 00:19:14.697 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:14.697 "strip_size_kb": 0, 00:19:14.697 "state": "online", 00:19:14.697 "raid_level": "raid1", 00:19:14.697 "superblock": true, 00:19:14.697 "num_base_bdevs": 2, 00:19:14.697 "num_base_bdevs_discovered": 2, 00:19:14.697 "num_base_bdevs_operational": 2, 00:19:14.697 "process": { 00:19:14.697 "type": "rebuild", 00:19:14.697 "target": "spare", 00:19:14.697 "progress": { 00:19:14.697 "blocks": 2560, 00:19:14.697 "percent": 32 00:19:14.697 } 00:19:14.697 }, 00:19:14.697 "base_bdevs_list": [ 00:19:14.697 { 00:19:14.697 "name": "spare", 00:19:14.697 "uuid": "cc1db9e8-4438-5130-9094-e4c887096ff0", 00:19:14.697 "is_configured": true, 00:19:14.697 "data_offset": 256, 00:19:14.697 "data_size": 7936 00:19:14.697 }, 00:19:14.697 { 00:19:14.697 "name": "BaseBdev2", 00:19:14.697 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:14.697 "is_configured": true, 00:19:14.697 "data_offset": 256, 00:19:14.697 "data_size": 7936 00:19:14.697 } 00:19:14.697 ] 00:19:14.697 }' 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.697 [2024-11-20 11:28:57.543736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:14.697 [2024-11-20 11:28:57.585045] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:14.697 [2024-11-20 11:28:57.585148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.697 [2024-11-20 11:28:57.585167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:14.697 [2024-11-20 11:28:57.585179] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.697 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.697 "name": "raid_bdev1", 00:19:14.697 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:14.697 "strip_size_kb": 0, 00:19:14.697 "state": "online", 00:19:14.697 "raid_level": "raid1", 00:19:14.697 "superblock": true, 00:19:14.697 "num_base_bdevs": 2, 00:19:14.697 "num_base_bdevs_discovered": 1, 00:19:14.697 "num_base_bdevs_operational": 1, 00:19:14.697 "base_bdevs_list": [ 00:19:14.697 { 00:19:14.697 "name": null, 00:19:14.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.697 "is_configured": false, 00:19:14.697 "data_offset": 0, 00:19:14.697 "data_size": 7936 00:19:14.697 }, 00:19:14.697 { 00:19:14.698 "name": "BaseBdev2", 00:19:14.698 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:14.698 "is_configured": true, 00:19:14.698 
"data_offset": 256, 00:19:14.698 "data_size": 7936 00:19:14.698 } 00:19:14.698 ] 00:19:14.698 }' 00:19:14.698 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.698 11:28:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.315 11:28:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:15.315 11:28:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.315 11:28:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.315 [2024-11-20 11:28:58.159238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:15.315 [2024-11-20 11:28:58.159416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.315 [2024-11-20 11:28:58.159487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:15.315 [2024-11-20 11:28:58.159541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.315 [2024-11-20 11:28:58.160079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.315 [2024-11-20 11:28:58.160146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:15.315 [2024-11-20 11:28:58.160283] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:15.315 [2024-11-20 11:28:58.160331] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:15.315 [2024-11-20 11:28:58.160386] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:15.315 [2024-11-20 11:28:58.160436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:15.315 [2024-11-20 11:28:58.178814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:15.315 spare 00:19:15.315 11:28:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.315 [2024-11-20 11:28:58.180776] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:15.315 11:28:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:16.254 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:16.254 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.254 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:16.254 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:16.254 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.254 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.254 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.254 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.254 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.254 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.254 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.254 "name": "raid_bdev1", 00:19:16.254 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:16.254 "strip_size_kb": 0, 00:19:16.254 
"state": "online", 00:19:16.254 "raid_level": "raid1", 00:19:16.254 "superblock": true, 00:19:16.254 "num_base_bdevs": 2, 00:19:16.254 "num_base_bdevs_discovered": 2, 00:19:16.254 "num_base_bdevs_operational": 2, 00:19:16.254 "process": { 00:19:16.254 "type": "rebuild", 00:19:16.254 "target": "spare", 00:19:16.254 "progress": { 00:19:16.254 "blocks": 2560, 00:19:16.254 "percent": 32 00:19:16.254 } 00:19:16.254 }, 00:19:16.254 "base_bdevs_list": [ 00:19:16.254 { 00:19:16.254 "name": "spare", 00:19:16.254 "uuid": "cc1db9e8-4438-5130-9094-e4c887096ff0", 00:19:16.254 "is_configured": true, 00:19:16.254 "data_offset": 256, 00:19:16.254 "data_size": 7936 00:19:16.254 }, 00:19:16.254 { 00:19:16.254 "name": "BaseBdev2", 00:19:16.254 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:16.254 "is_configured": true, 00:19:16.254 "data_offset": 256, 00:19:16.254 "data_size": 7936 00:19:16.254 } 00:19:16.254 ] 00:19:16.254 }' 00:19:16.254 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.254 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:16.254 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.254 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:16.254 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:16.254 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.254 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.254 [2024-11-20 11:28:59.344490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:16.513 [2024-11-20 11:28:59.386769] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:19:16.513 [2024-11-20 11:28:59.386964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.513 [2024-11-20 11:28:59.387011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:16.513 [2024-11-20 11:28:59.387036] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:16.513 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.513 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:16.513 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:16.513 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.513 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:16.513 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.513 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:16.513 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.513 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.513 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.513 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.513 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.513 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.513 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.513 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.513 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.513 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.513 "name": "raid_bdev1", 00:19:16.513 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:16.513 "strip_size_kb": 0, 00:19:16.513 "state": "online", 00:19:16.513 "raid_level": "raid1", 00:19:16.513 "superblock": true, 00:19:16.513 "num_base_bdevs": 2, 00:19:16.513 "num_base_bdevs_discovered": 1, 00:19:16.513 "num_base_bdevs_operational": 1, 00:19:16.513 "base_bdevs_list": [ 00:19:16.513 { 00:19:16.513 "name": null, 00:19:16.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.513 "is_configured": false, 00:19:16.513 "data_offset": 0, 00:19:16.513 "data_size": 7936 00:19:16.513 }, 00:19:16.513 { 00:19:16.513 "name": "BaseBdev2", 00:19:16.513 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:16.513 "is_configured": true, 00:19:16.513 "data_offset": 256, 00:19:16.513 "data_size": 7936 00:19:16.513 } 00:19:16.513 ] 00:19:16.513 }' 00:19:16.513 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.513 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.774 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:16.774 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.774 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:16.774 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:16.774 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.774 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.774 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.774 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.774 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.034 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.034 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.034 "name": "raid_bdev1", 00:19:17.034 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:17.034 "strip_size_kb": 0, 00:19:17.034 "state": "online", 00:19:17.034 "raid_level": "raid1", 00:19:17.034 "superblock": true, 00:19:17.034 "num_base_bdevs": 2, 00:19:17.034 "num_base_bdevs_discovered": 1, 00:19:17.034 "num_base_bdevs_operational": 1, 00:19:17.034 "base_bdevs_list": [ 00:19:17.034 { 00:19:17.034 "name": null, 00:19:17.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.034 "is_configured": false, 00:19:17.034 "data_offset": 0, 00:19:17.034 "data_size": 7936 00:19:17.034 }, 00:19:17.034 { 00:19:17.034 "name": "BaseBdev2", 00:19:17.034 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:17.034 "is_configured": true, 00:19:17.034 "data_offset": 256, 00:19:17.034 "data_size": 7936 00:19:17.034 } 00:19:17.034 ] 00:19:17.034 }' 00:19:17.034 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.034 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:17.034 11:28:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.034 11:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:17.034 11:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:19:17.034 11:29:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.034 11:29:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.034 11:29:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.034 11:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:17.034 11:29:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.034 11:29:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.034 [2024-11-20 11:29:00.033827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:17.034 [2024-11-20 11:29:00.033925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.034 [2024-11-20 11:29:00.033952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:17.034 [2024-11-20 11:29:00.033975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.034 [2024-11-20 11:29:00.034525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.034 [2024-11-20 11:29:00.034551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:17.034 [2024-11-20 11:29:00.034654] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:17.034 [2024-11-20 11:29:00.034670] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:17.034 [2024-11-20 11:29:00.034681] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:17.034 [2024-11-20 11:29:00.034693] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed 
to examine bdev BaseBdev1: Invalid argument 00:19:17.034 BaseBdev1 00:19:17.034 11:29:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.034 11:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:17.994 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:17.994 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.994 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.994 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:17.994 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:17.994 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:17.994 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.994 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.994 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.994 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.994 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.994 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.994 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.994 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.994 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.994 11:29:01 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.994 "name": "raid_bdev1", 00:19:17.995 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:17.995 "strip_size_kb": 0, 00:19:17.995 "state": "online", 00:19:17.995 "raid_level": "raid1", 00:19:17.995 "superblock": true, 00:19:17.995 "num_base_bdevs": 2, 00:19:17.995 "num_base_bdevs_discovered": 1, 00:19:17.995 "num_base_bdevs_operational": 1, 00:19:17.995 "base_bdevs_list": [ 00:19:17.995 { 00:19:17.995 "name": null, 00:19:17.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.995 "is_configured": false, 00:19:17.995 "data_offset": 0, 00:19:17.995 "data_size": 7936 00:19:17.995 }, 00:19:17.995 { 00:19:17.995 "name": "BaseBdev2", 00:19:17.995 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:17.995 "is_configured": true, 00:19:17.995 "data_offset": 256, 00:19:17.995 "data_size": 7936 00:19:17.995 } 00:19:17.995 ] 00:19:17.995 }' 00:19:17.995 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.995 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.562 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.562 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.562 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:18.562 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:18.562 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.562 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.562 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.562 11:29:01 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.562 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.562 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.562 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.562 "name": "raid_bdev1", 00:19:18.562 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:18.562 "strip_size_kb": 0, 00:19:18.562 "state": "online", 00:19:18.562 "raid_level": "raid1", 00:19:18.562 "superblock": true, 00:19:18.562 "num_base_bdevs": 2, 00:19:18.562 "num_base_bdevs_discovered": 1, 00:19:18.562 "num_base_bdevs_operational": 1, 00:19:18.562 "base_bdevs_list": [ 00:19:18.562 { 00:19:18.562 "name": null, 00:19:18.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.562 "is_configured": false, 00:19:18.562 "data_offset": 0, 00:19:18.562 "data_size": 7936 00:19:18.562 }, 00:19:18.562 { 00:19:18.562 "name": "BaseBdev2", 00:19:18.562 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:18.562 "is_configured": true, 00:19:18.562 "data_offset": 256, 00:19:18.562 "data_size": 7936 00:19:18.562 } 00:19:18.562 ] 00:19:18.562 }' 00:19:18.562 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.562 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:18.562 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.562 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:18.562 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:18.562 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:19:18.562 11:29:01 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:18.562 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:18.821 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.821 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:18.821 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.821 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:18.821 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.821 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.821 [2024-11-20 11:29:01.687157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:18.822 [2024-11-20 11:29:01.687475] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:18.822 [2024-11-20 11:29:01.687549] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:18.822 request: 00:19:18.822 { 00:19:18.822 "base_bdev": "BaseBdev1", 00:19:18.822 "raid_bdev": "raid_bdev1", 00:19:18.822 "method": "bdev_raid_add_base_bdev", 00:19:18.822 "req_id": 1 00:19:18.822 } 00:19:18.822 Got JSON-RPC error response 00:19:18.822 response: 00:19:18.822 { 00:19:18.822 "code": -22, 00:19:18.822 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:18.822 } 00:19:18.822 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:18.822 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@655 -- # es=1 00:19:18.822 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:18.822 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:18.822 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:18.822 11:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:19.757 11:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:19.757 11:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.757 11:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.757 11:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.757 11:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.757 11:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:19.757 11:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.757 11:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.757 11:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.757 11:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.757 11:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.757 11:29:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.757 11:29:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.757 11:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.757 11:29:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.757 11:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.757 "name": "raid_bdev1", 00:19:19.757 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:19.757 "strip_size_kb": 0, 00:19:19.757 "state": "online", 00:19:19.757 "raid_level": "raid1", 00:19:19.757 "superblock": true, 00:19:19.757 "num_base_bdevs": 2, 00:19:19.757 "num_base_bdevs_discovered": 1, 00:19:19.757 "num_base_bdevs_operational": 1, 00:19:19.757 "base_bdevs_list": [ 00:19:19.757 { 00:19:19.757 "name": null, 00:19:19.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.757 "is_configured": false, 00:19:19.757 "data_offset": 0, 00:19:19.757 "data_size": 7936 00:19:19.757 }, 00:19:19.757 { 00:19:19.757 "name": "BaseBdev2", 00:19:19.757 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:19.757 "is_configured": true, 00:19:19.757 "data_offset": 256, 00:19:19.757 "data_size": 7936 00:19:19.757 } 00:19:19.757 ] 00:19:19.757 }' 00:19:19.757 11:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.757 11:29:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.327 "name": "raid_bdev1", 00:19:20.327 "uuid": "34798547-6972-4424-966a-933878bc0fb1", 00:19:20.327 "strip_size_kb": 0, 00:19:20.327 "state": "online", 00:19:20.327 "raid_level": "raid1", 00:19:20.327 "superblock": true, 00:19:20.327 "num_base_bdevs": 2, 00:19:20.327 "num_base_bdevs_discovered": 1, 00:19:20.327 "num_base_bdevs_operational": 1, 00:19:20.327 "base_bdevs_list": [ 00:19:20.327 { 00:19:20.327 "name": null, 00:19:20.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.327 "is_configured": false, 00:19:20.327 "data_offset": 0, 00:19:20.327 "data_size": 7936 00:19:20.327 }, 00:19:20.327 { 00:19:20.327 "name": "BaseBdev2", 00:19:20.327 "uuid": "00672e11-09de-5306-8bcf-30d7cf302a23", 00:19:20.327 "is_configured": true, 00:19:20.327 "data_offset": 256, 00:19:20.327 "data_size": 7936 00:19:20.327 } 00:19:20.327 ] 00:19:20.327 }' 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86732 
00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86732 ']' 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86732 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86732 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:20.327 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:20.328 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86732' 00:19:20.328 killing process with pid 86732 00:19:20.328 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86732 00:19:20.328 Received shutdown signal, test time was about 60.000000 seconds 00:19:20.328 00:19:20.328 Latency(us) 00:19:20.328 [2024-11-20T11:29:03.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.328 [2024-11-20T11:29:03.444Z] =================================================================================================================== 00:19:20.328 [2024-11-20T11:29:03.444Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:20.328 11:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86732 00:19:20.328 [2024-11-20 11:29:03.352717] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:20.328 [2024-11-20 11:29:03.352899] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:20.328 [2024-11-20 11:29:03.352987] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:19:20.328 [2024-11-20 11:29:03.353038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:20.587 [2024-11-20 11:29:03.689706] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:21.966 11:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:19:21.966 00:19:21.966 real 0m20.800s 00:19:21.966 user 0m27.192s 00:19:21.966 sys 0m2.901s 00:19:21.966 11:29:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.966 11:29:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.966 ************************************ 00:19:21.966 END TEST raid_rebuild_test_sb_4k 00:19:21.966 ************************************ 00:19:21.966 11:29:05 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:19:21.966 11:29:05 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:19:21.966 11:29:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:21.966 11:29:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:21.966 11:29:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:21.966 ************************************ 00:19:21.966 START TEST raid_state_function_test_sb_md_separate 00:19:21.966 ************************************ 00:19:21.966 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:21.966 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:21.966 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:21.966 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # 
local superblock=true 00:19:21.966 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:21.966 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:21.966 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:21.966 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:21.966 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:21.966 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:21.966 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:21.966 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:21.966 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:21.966 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:21.966 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:21.966 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:21.966 Process raid pid: 87432 00:19:21.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:21.966 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:21.966 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:21.966 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:21.967 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:21.967 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:21.967 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:21.967 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:21.967 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87432 00:19:21.967 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87432' 00:19:21.967 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87432 00:19:21.967 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:21.967 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87432 ']' 00:19:21.967 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.967 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.967 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:21.967 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.967 11:29:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:22.227 [2024-11-20 11:29:05.148396] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:19:22.227 [2024-11-20 11:29:05.148777] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.227 [2024-11-20 11:29:05.335680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.486 [2024-11-20 11:29:05.468426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.795 [2024-11-20 11:29:05.700000] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:22.795 [2024-11-20 11:29:05.700159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.057 [2024-11-20 11:29:06.075029] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:23.057 [2024-11-20 11:29:06.075185] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:23.057 [2024-11-20 11:29:06.075226] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:23.057 [2024-11-20 11:29:06.075256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.057 "name": "Existed_Raid", 00:19:23.057 "uuid": "6c645378-cac4-4fd6-9f2a-48d689366b92", 00:19:23.057 "strip_size_kb": 0, 00:19:23.057 "state": "configuring", 00:19:23.057 "raid_level": "raid1", 00:19:23.057 "superblock": true, 00:19:23.057 "num_base_bdevs": 2, 00:19:23.057 "num_base_bdevs_discovered": 0, 00:19:23.057 "num_base_bdevs_operational": 2, 00:19:23.057 "base_bdevs_list": [ 00:19:23.057 { 00:19:23.057 "name": "BaseBdev1", 00:19:23.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.057 "is_configured": false, 00:19:23.057 "data_offset": 0, 00:19:23.057 "data_size": 0 00:19:23.057 }, 00:19:23.057 { 00:19:23.057 "name": "BaseBdev2", 00:19:23.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.057 "is_configured": false, 00:19:23.057 "data_offset": 0, 00:19:23.057 "data_size": 0 00:19:23.057 } 00:19:23.057 ] 00:19:23.057 }' 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.057 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:23.626 [2024-11-20 11:29:06.542159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:23.626 [2024-11-20 11:29:06.542201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.626 [2024-11-20 11:29:06.554128] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:23.626 [2024-11-20 11:29:06.554179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:23.626 [2024-11-20 11:29:06.554189] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:23.626 [2024-11-20 11:29:06.554202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.626 [2024-11-20 11:29:06.604605] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:23.626 BaseBdev1 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.626 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.626 [ 00:19:23.626 { 00:19:23.626 "name": "BaseBdev1", 00:19:23.626 "aliases": [ 00:19:23.626 
"5d68470e-130a-484d-8319-7b8ff642adf2" 00:19:23.626 ], 00:19:23.626 "product_name": "Malloc disk", 00:19:23.626 "block_size": 4096, 00:19:23.626 "num_blocks": 8192, 00:19:23.626 "uuid": "5d68470e-130a-484d-8319-7b8ff642adf2", 00:19:23.626 "md_size": 32, 00:19:23.626 "md_interleave": false, 00:19:23.626 "dif_type": 0, 00:19:23.626 "assigned_rate_limits": { 00:19:23.626 "rw_ios_per_sec": 0, 00:19:23.626 "rw_mbytes_per_sec": 0, 00:19:23.626 "r_mbytes_per_sec": 0, 00:19:23.626 "w_mbytes_per_sec": 0 00:19:23.626 }, 00:19:23.626 "claimed": true, 00:19:23.626 "claim_type": "exclusive_write", 00:19:23.626 "zoned": false, 00:19:23.626 "supported_io_types": { 00:19:23.626 "read": true, 00:19:23.626 "write": true, 00:19:23.626 "unmap": true, 00:19:23.626 "flush": true, 00:19:23.626 "reset": true, 00:19:23.626 "nvme_admin": false, 00:19:23.626 "nvme_io": false, 00:19:23.626 "nvme_io_md": false, 00:19:23.626 "write_zeroes": true, 00:19:23.626 "zcopy": true, 00:19:23.626 "get_zone_info": false, 00:19:23.626 "zone_management": false, 00:19:23.626 "zone_append": false, 00:19:23.626 "compare": false, 00:19:23.626 "compare_and_write": false, 00:19:23.627 "abort": true, 00:19:23.627 "seek_hole": false, 00:19:23.627 "seek_data": false, 00:19:23.627 "copy": true, 00:19:23.627 "nvme_iov_md": false 00:19:23.627 }, 00:19:23.627 "memory_domains": [ 00:19:23.627 { 00:19:23.627 "dma_device_id": "system", 00:19:23.627 "dma_device_type": 1 00:19:23.627 }, 00:19:23.627 { 00:19:23.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.627 "dma_device_type": 2 00:19:23.627 } 00:19:23.627 ], 00:19:23.627 "driver_specific": {} 00:19:23.627 } 00:19:23.627 ] 00:19:23.627 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.627 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:19:23.627 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:23.627 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:23.627 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:23.627 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:23.627 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:23.627 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:23.627 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.627 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.627 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.627 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.627 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.627 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.627 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.627 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:23.627 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.627 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.627 "name": 
"Existed_Raid", 00:19:23.627 "uuid": "c8f0048f-11d3-48d2-9924-ec692c0299b1", 00:19:23.627 "strip_size_kb": 0, 00:19:23.627 "state": "configuring", 00:19:23.627 "raid_level": "raid1", 00:19:23.627 "superblock": true, 00:19:23.627 "num_base_bdevs": 2, 00:19:23.627 "num_base_bdevs_discovered": 1, 00:19:23.627 "num_base_bdevs_operational": 2, 00:19:23.627 "base_bdevs_list": [ 00:19:23.627 { 00:19:23.627 "name": "BaseBdev1", 00:19:23.627 "uuid": "5d68470e-130a-484d-8319-7b8ff642adf2", 00:19:23.627 "is_configured": true, 00:19:23.627 "data_offset": 256, 00:19:23.627 "data_size": 7936 00:19:23.627 }, 00:19:23.627 { 00:19:23.627 "name": "BaseBdev2", 00:19:23.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.627 "is_configured": false, 00:19:23.627 "data_offset": 0, 00:19:23.627 "data_size": 0 00:19:23.627 } 00:19:23.627 ] 00:19:23.627 }' 00:19:23.627 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.627 11:29:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:24.197 [2024-11-20 11:29:07.147804] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:24.197 [2024-11-20 11:29:07.147969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r 
raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:24.197 [2024-11-20 11:29:07.159802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:24.197 [2024-11-20 11:29:07.161631] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:24.197 [2024-11-20 11:29:07.161676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.197 
11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.197 "name": "Existed_Raid", 00:19:24.197 "uuid": "a437cc53-4f7c-4c9d-bdc3-fb27fda91ea6", 00:19:24.197 "strip_size_kb": 0, 00:19:24.197 "state": "configuring", 00:19:24.197 "raid_level": "raid1", 00:19:24.197 "superblock": true, 00:19:24.197 "num_base_bdevs": 2, 00:19:24.197 "num_base_bdevs_discovered": 1, 00:19:24.197 "num_base_bdevs_operational": 2, 00:19:24.197 "base_bdevs_list": [ 00:19:24.197 { 00:19:24.197 "name": "BaseBdev1", 00:19:24.197 "uuid": "5d68470e-130a-484d-8319-7b8ff642adf2", 00:19:24.197 "is_configured": true, 00:19:24.197 "data_offset": 256, 00:19:24.197 "data_size": 7936 00:19:24.197 }, 00:19:24.197 { 00:19:24.197 "name": "BaseBdev2", 00:19:24.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.197 "is_configured": false, 00:19:24.197 "data_offset": 0, 00:19:24.197 "data_size": 0 00:19:24.197 } 00:19:24.197 ] 00:19:24.197 }' 
00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.197 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:24.767 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:24.768 [2024-11-20 11:29:07.658644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:24.768 [2024-11-20 11:29:07.658966] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:24.768 [2024-11-20 11:29:07.659016] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:24.768 [2024-11-20 11:29:07.659135] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:24.768 [2024-11-20 11:29:07.659278] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:24.768 [2024-11-20 11:29:07.659316] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:24.768 [2024-11-20 11:29:07.659473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.768 BaseBdev2 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:24.768 11:29:07 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:24.768 [ 00:19:24.768 { 00:19:24.768 "name": "BaseBdev2", 00:19:24.768 "aliases": [ 00:19:24.768 "ec8c2110-a333-4985-aa7a-ca63d14496cc" 00:19:24.768 ], 00:19:24.768 "product_name": "Malloc disk", 00:19:24.768 "block_size": 4096, 00:19:24.768 "num_blocks": 8192, 00:19:24.768 "uuid": "ec8c2110-a333-4985-aa7a-ca63d14496cc", 00:19:24.768 "md_size": 32, 00:19:24.768 "md_interleave": false, 00:19:24.768 "dif_type": 0, 00:19:24.768 "assigned_rate_limits": { 00:19:24.768 "rw_ios_per_sec": 0, 00:19:24.768 "rw_mbytes_per_sec": 0, 00:19:24.768 "r_mbytes_per_sec": 0, 00:19:24.768 "w_mbytes_per_sec": 0 00:19:24.768 }, 
00:19:24.768 "claimed": true, 00:19:24.768 "claim_type": "exclusive_write", 00:19:24.768 "zoned": false, 00:19:24.768 "supported_io_types": { 00:19:24.768 "read": true, 00:19:24.768 "write": true, 00:19:24.768 "unmap": true, 00:19:24.768 "flush": true, 00:19:24.768 "reset": true, 00:19:24.768 "nvme_admin": false, 00:19:24.768 "nvme_io": false, 00:19:24.768 "nvme_io_md": false, 00:19:24.768 "write_zeroes": true, 00:19:24.768 "zcopy": true, 00:19:24.768 "get_zone_info": false, 00:19:24.768 "zone_management": false, 00:19:24.768 "zone_append": false, 00:19:24.768 "compare": false, 00:19:24.768 "compare_and_write": false, 00:19:24.768 "abort": true, 00:19:24.768 "seek_hole": false, 00:19:24.768 "seek_data": false, 00:19:24.768 "copy": true, 00:19:24.768 "nvme_iov_md": false 00:19:24.768 }, 00:19:24.768 "memory_domains": [ 00:19:24.768 { 00:19:24.768 "dma_device_id": "system", 00:19:24.768 "dma_device_type": 1 00:19:24.768 }, 00:19:24.768 { 00:19:24.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.768 "dma_device_type": 2 00:19:24.768 } 00:19:24.768 ], 00:19:24.768 "driver_specific": {} 00:19:24.768 } 00:19:24.768 ] 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.768 "name": "Existed_Raid", 00:19:24.768 "uuid": "a437cc53-4f7c-4c9d-bdc3-fb27fda91ea6", 00:19:24.768 "strip_size_kb": 0, 00:19:24.768 "state": "online", 00:19:24.768 "raid_level": "raid1", 00:19:24.768 "superblock": true, 00:19:24.768 "num_base_bdevs": 2, 00:19:24.768 
"num_base_bdevs_discovered": 2, 00:19:24.768 "num_base_bdevs_operational": 2, 00:19:24.768 "base_bdevs_list": [ 00:19:24.768 { 00:19:24.768 "name": "BaseBdev1", 00:19:24.768 "uuid": "5d68470e-130a-484d-8319-7b8ff642adf2", 00:19:24.768 "is_configured": true, 00:19:24.768 "data_offset": 256, 00:19:24.768 "data_size": 7936 00:19:24.768 }, 00:19:24.768 { 00:19:24.768 "name": "BaseBdev2", 00:19:24.768 "uuid": "ec8c2110-a333-4985-aa7a-ca63d14496cc", 00:19:24.768 "is_configured": true, 00:19:24.768 "data_offset": 256, 00:19:24.768 "data_size": 7936 00:19:24.768 } 00:19:24.768 ] 00:19:24.768 }' 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.768 11:29:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:25.337 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:25.338 
11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:25.338 [2024-11-20 11:29:08.186192] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:25.338 "name": "Existed_Raid", 00:19:25.338 "aliases": [ 00:19:25.338 "a437cc53-4f7c-4c9d-bdc3-fb27fda91ea6" 00:19:25.338 ], 00:19:25.338 "product_name": "Raid Volume", 00:19:25.338 "block_size": 4096, 00:19:25.338 "num_blocks": 7936, 00:19:25.338 "uuid": "a437cc53-4f7c-4c9d-bdc3-fb27fda91ea6", 00:19:25.338 "md_size": 32, 00:19:25.338 "md_interleave": false, 00:19:25.338 "dif_type": 0, 00:19:25.338 "assigned_rate_limits": { 00:19:25.338 "rw_ios_per_sec": 0, 00:19:25.338 "rw_mbytes_per_sec": 0, 00:19:25.338 "r_mbytes_per_sec": 0, 00:19:25.338 "w_mbytes_per_sec": 0 00:19:25.338 }, 00:19:25.338 "claimed": false, 00:19:25.338 "zoned": false, 00:19:25.338 "supported_io_types": { 00:19:25.338 "read": true, 00:19:25.338 "write": true, 00:19:25.338 "unmap": false, 00:19:25.338 "flush": false, 00:19:25.338 "reset": true, 00:19:25.338 "nvme_admin": false, 00:19:25.338 "nvme_io": false, 00:19:25.338 "nvme_io_md": false, 00:19:25.338 "write_zeroes": true, 00:19:25.338 "zcopy": false, 00:19:25.338 "get_zone_info": false, 00:19:25.338 "zone_management": false, 00:19:25.338 "zone_append": false, 00:19:25.338 "compare": false, 00:19:25.338 "compare_and_write": false, 00:19:25.338 "abort": false, 00:19:25.338 "seek_hole": false, 00:19:25.338 "seek_data": false, 00:19:25.338 "copy": false, 00:19:25.338 "nvme_iov_md": false 00:19:25.338 }, 00:19:25.338 "memory_domains": [ 00:19:25.338 { 00:19:25.338 "dma_device_id": "system", 00:19:25.338 "dma_device_type": 1 00:19:25.338 }, 00:19:25.338 { 00:19:25.338 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:25.338 "dma_device_type": 2 00:19:25.338 }, 00:19:25.338 { 00:19:25.338 "dma_device_id": "system", 00:19:25.338 "dma_device_type": 1 00:19:25.338 }, 00:19:25.338 { 00:19:25.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.338 "dma_device_type": 2 00:19:25.338 } 00:19:25.338 ], 00:19:25.338 "driver_specific": { 00:19:25.338 "raid": { 00:19:25.338 "uuid": "a437cc53-4f7c-4c9d-bdc3-fb27fda91ea6", 00:19:25.338 "strip_size_kb": 0, 00:19:25.338 "state": "online", 00:19:25.338 "raid_level": "raid1", 00:19:25.338 "superblock": true, 00:19:25.338 "num_base_bdevs": 2, 00:19:25.338 "num_base_bdevs_discovered": 2, 00:19:25.338 "num_base_bdevs_operational": 2, 00:19:25.338 "base_bdevs_list": [ 00:19:25.338 { 00:19:25.338 "name": "BaseBdev1", 00:19:25.338 "uuid": "5d68470e-130a-484d-8319-7b8ff642adf2", 00:19:25.338 "is_configured": true, 00:19:25.338 "data_offset": 256, 00:19:25.338 "data_size": 7936 00:19:25.338 }, 00:19:25.338 { 00:19:25.338 "name": "BaseBdev2", 00:19:25.338 "uuid": "ec8c2110-a333-4985-aa7a-ca63d14496cc", 00:19:25.338 "is_configured": true, 00:19:25.338 "data_offset": 256, 00:19:25.338 "data_size": 7936 00:19:25.338 } 00:19:25.338 ] 00:19:25.338 } 00:19:25.338 } 00:19:25.338 }' 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:25.338 BaseBdev2' 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:25.338 11:29:08 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.338 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:25.338 [2024-11-20 11:29:08.425522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:25.597 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.597 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:25.597 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:25.597 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:25.597 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:25.597 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:25.597 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:25.597 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:25.597 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.597 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.597 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:19:25.597 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:25.597 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.597 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.597 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.597 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.598 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.598 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.598 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:25.598 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.598 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.598 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.598 "name": "Existed_Raid", 00:19:25.598 "uuid": "a437cc53-4f7c-4c9d-bdc3-fb27fda91ea6", 00:19:25.598 "strip_size_kb": 0, 00:19:25.598 "state": "online", 00:19:25.598 "raid_level": "raid1", 00:19:25.598 "superblock": true, 00:19:25.598 "num_base_bdevs": 2, 00:19:25.598 "num_base_bdevs_discovered": 1, 00:19:25.598 "num_base_bdevs_operational": 1, 00:19:25.598 "base_bdevs_list": [ 00:19:25.598 { 00:19:25.598 "name": null, 00:19:25.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.598 "is_configured": false, 00:19:25.598 "data_offset": 0, 00:19:25.598 "data_size": 
7936 00:19:25.598 }, 00:19:25.598 { 00:19:25.598 "name": "BaseBdev2", 00:19:25.598 "uuid": "ec8c2110-a333-4985-aa7a-ca63d14496cc", 00:19:25.598 "is_configured": true, 00:19:25.598 "data_offset": 256, 00:19:25.598 "data_size": 7936 00:19:25.598 } 00:19:25.598 ] 00:19:25.598 }' 00:19:25.598 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.598 11:29:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:26.168 [2024-11-20 11:29:09.086953] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:26.168 [2024-11-20 11:29:09.087079] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:26.168 [2024-11-20 11:29:09.212728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:26.168 [2024-11-20 11:29:09.212875] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:26.168 [2024-11-20 11:29:09.212929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:26.168 11:29:09 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:26.168 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87432 00:19:26.169 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87432 ']' 00:19:26.169 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87432 00:19:26.169 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:26.169 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.428 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87432 00:19:26.428 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:26.428 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:26.428 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87432' 00:19:26.428 killing process with pid 87432 00:19:26.428 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87432 00:19:26.428 [2024-11-20 11:29:09.317822] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:26.428 11:29:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87432 00:19:26.428 [2024-11-20 11:29:09.338238] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:27.810 11:29:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:19:27.810 00:19:27.810 real 0m5.607s 00:19:27.810 user 0m8.043s 00:19:27.810 sys 0m0.952s 00:19:27.810 11:29:10 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.810 11:29:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:27.810 ************************************ 00:19:27.810 END TEST raid_state_function_test_sb_md_separate 00:19:27.810 ************************************ 00:19:27.810 11:29:10 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:19:27.810 11:29:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:27.810 11:29:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.810 11:29:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.810 ************************************ 00:19:27.810 START TEST raid_superblock_test_md_separate 00:19:27.810 ************************************ 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 
00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87685 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87685 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87685 ']' 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.810 11:29:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:27.810 [2024-11-20 11:29:10.810395] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:19:27.810 [2024-11-20 11:29:10.810552] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87685 ] 00:19:28.070 [2024-11-20 11:29:10.983976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.070 [2024-11-20 11:29:11.119963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.329 [2024-11-20 11:29:11.340319] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.329 [2024-11-20 11:29:11.340393] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.898 11:29:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.898 11:29:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:28.898 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:28.898 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:28.898 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:28.898 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:28.898 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:28.898 11:29:11 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:28.898 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:28.898 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:28.898 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:19:28.898 11:29:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.898 11:29:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.898 malloc1 00:19:28.898 11:29:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.898 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:28.898 11:29:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.898 11:29:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.898 [2024-11-20 11:29:11.770142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:28.899 [2024-11-20 11:29:11.770326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.899 [2024-11-20 11:29:11.770374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:28.899 [2024-11-20 11:29:11.770418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.899 [2024-11-20 11:29:11.772792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.899 [2024-11-20 11:29:11.772895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:19:28.899 pt1 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.899 malloc2 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.899 11:29:11 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.899 [2024-11-20 11:29:11.830417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:28.899 [2024-11-20 11:29:11.830507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.899 [2024-11-20 11:29:11.830529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:28.899 [2024-11-20 11:29:11.830540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.899 [2024-11-20 11:29:11.832722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.899 [2024-11-20 11:29:11.832768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:28.899 pt2 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.899 [2024-11-20 11:29:11.842421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:28.899 [2024-11-20 11:29:11.844812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:28.899 [2024-11-20 11:29:11.845021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:28.899 [2024-11-20 11:29:11.845038] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:28.899 [2024-11-20 11:29:11.845135] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:28.899 [2024-11-20 11:29:11.845275] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:28.899 [2024-11-20 11:29:11.845305] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:28.899 [2024-11-20 11:29:11.845442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.899 11:29:11 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.899 "name": "raid_bdev1", 00:19:28.899 "uuid": "93844104-b1d8-4f1e-b9ab-8d3fca15ac33", 00:19:28.899 "strip_size_kb": 0, 00:19:28.899 "state": "online", 00:19:28.899 "raid_level": "raid1", 00:19:28.899 "superblock": true, 00:19:28.899 "num_base_bdevs": 2, 00:19:28.899 "num_base_bdevs_discovered": 2, 00:19:28.899 "num_base_bdevs_operational": 2, 00:19:28.899 "base_bdevs_list": [ 00:19:28.899 { 00:19:28.899 "name": "pt1", 00:19:28.899 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:28.899 "is_configured": true, 00:19:28.899 "data_offset": 256, 00:19:28.899 "data_size": 7936 00:19:28.899 }, 00:19:28.899 { 00:19:28.899 "name": "pt2", 00:19:28.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:28.899 "is_configured": true, 00:19:28.899 "data_offset": 256, 00:19:28.899 "data_size": 7936 00:19:28.899 } 00:19:28.899 ] 00:19:28.899 }' 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.899 11:29:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.468 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:29.468 11:29:12 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:29.468 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:29.468 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:29.468 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:29.468 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:29.468 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:29.468 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:29.468 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.468 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.468 [2024-11-20 11:29:12.329950] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:29.468 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.468 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:29.468 "name": "raid_bdev1", 00:19:29.468 "aliases": [ 00:19:29.468 "93844104-b1d8-4f1e-b9ab-8d3fca15ac33" 00:19:29.468 ], 00:19:29.468 "product_name": "Raid Volume", 00:19:29.468 "block_size": 4096, 00:19:29.468 "num_blocks": 7936, 00:19:29.468 "uuid": "93844104-b1d8-4f1e-b9ab-8d3fca15ac33", 00:19:29.468 "md_size": 32, 00:19:29.468 "md_interleave": false, 00:19:29.468 "dif_type": 0, 00:19:29.468 "assigned_rate_limits": { 00:19:29.468 "rw_ios_per_sec": 0, 00:19:29.468 "rw_mbytes_per_sec": 0, 00:19:29.468 "r_mbytes_per_sec": 0, 00:19:29.468 "w_mbytes_per_sec": 0 00:19:29.468 }, 00:19:29.468 "claimed": false, 00:19:29.468 "zoned": false, 
00:19:29.468 "supported_io_types": { 00:19:29.468 "read": true, 00:19:29.468 "write": true, 00:19:29.468 "unmap": false, 00:19:29.468 "flush": false, 00:19:29.468 "reset": true, 00:19:29.468 "nvme_admin": false, 00:19:29.468 "nvme_io": false, 00:19:29.468 "nvme_io_md": false, 00:19:29.468 "write_zeroes": true, 00:19:29.468 "zcopy": false, 00:19:29.468 "get_zone_info": false, 00:19:29.468 "zone_management": false, 00:19:29.468 "zone_append": false, 00:19:29.468 "compare": false, 00:19:29.468 "compare_and_write": false, 00:19:29.468 "abort": false, 00:19:29.468 "seek_hole": false, 00:19:29.468 "seek_data": false, 00:19:29.468 "copy": false, 00:19:29.468 "nvme_iov_md": false 00:19:29.468 }, 00:19:29.468 "memory_domains": [ 00:19:29.468 { 00:19:29.468 "dma_device_id": "system", 00:19:29.468 "dma_device_type": 1 00:19:29.468 }, 00:19:29.469 { 00:19:29.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.469 "dma_device_type": 2 00:19:29.469 }, 00:19:29.469 { 00:19:29.469 "dma_device_id": "system", 00:19:29.469 "dma_device_type": 1 00:19:29.469 }, 00:19:29.469 { 00:19:29.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.469 "dma_device_type": 2 00:19:29.469 } 00:19:29.469 ], 00:19:29.469 "driver_specific": { 00:19:29.469 "raid": { 00:19:29.469 "uuid": "93844104-b1d8-4f1e-b9ab-8d3fca15ac33", 00:19:29.469 "strip_size_kb": 0, 00:19:29.469 "state": "online", 00:19:29.469 "raid_level": "raid1", 00:19:29.469 "superblock": true, 00:19:29.469 "num_base_bdevs": 2, 00:19:29.469 "num_base_bdevs_discovered": 2, 00:19:29.469 "num_base_bdevs_operational": 2, 00:19:29.469 "base_bdevs_list": [ 00:19:29.469 { 00:19:29.469 "name": "pt1", 00:19:29.469 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:29.469 "is_configured": true, 00:19:29.469 "data_offset": 256, 00:19:29.469 "data_size": 7936 00:19:29.469 }, 00:19:29.469 { 00:19:29.469 "name": "pt2", 00:19:29.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:29.469 "is_configured": true, 00:19:29.469 "data_offset": 256, 
00:19:29.469 "data_size": 7936 00:19:29.469 } 00:19:29.469 ] 00:19:29.469 } 00:19:29.469 } 00:19:29.469 }' 00:19:29.469 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:29.469 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:29.469 pt2' 00:19:29.469 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:29.469 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:29.469 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:29.469 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:29.469 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.469 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.469 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:29.469 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.469 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:29.469 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:29.469 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:29.469 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:19:29.469 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:29.469 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.469 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.469 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.729 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.730 [2024-11-20 11:29:12.593502] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=93844104-b1d8-4f1e-b9ab-8d3fca15ac33 00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 93844104-b1d8-4f1e-b9ab-8d3fca15ac33 ']' 00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.730 [2024-11-20 11:29:12.641064] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:29.730 [2024-11-20 11:29:12.641098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:29.730 [2024-11-20 11:29:12.641195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:29.730 [2024-11-20 11:29:12.641273] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:29.730 [2024-11-20 11:29:12.641287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:29.730 [2024-11-20 11:29:12.780856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:19:29.730 [2024-11-20 11:29:12.782965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:19:29.730 [2024-11-20 11:29:12.783061] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:19:29.730 [2024-11-20 11:29:12.783123] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:19:29.730 [2024-11-20 11:29:12.783141] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:29.730 [2024-11-20 11:29:12.783153] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:19:29.730 request:
00:19:29.730 {
00:19:29.730 "name": "raid_bdev1",
00:19:29.730 "raid_level": "raid1",
00:19:29.730 "base_bdevs": [
00:19:29.730 "malloc1",
00:19:29.730 "malloc2"
00:19:29.730 ],
00:19:29.730 "superblock": false,
00:19:29.730 "method": "bdev_raid_create",
00:19:29.730 "req_id": 1
00:19:29.730 }
00:19:29.730 Got JSON-RPC error response
00:19:29.730 response:
00:19:29.730 {
00:19:29.730 "code": -17,
00:19:29.730 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:19:29.730 }
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:29.730 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:29.990 [2024-11-20 11:29:12.848714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:19:29.990 [2024-11-20 11:29:12.848780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:29.990 [2024-11-20 11:29:12.848798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:19:29.990 [2024-11-20 11:29:12.848811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:29.990 [2024-11-20 11:29:12.851108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:29.990 [2024-11-20 11:29:12.851154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:19:29.990 [2024-11-20 11:29:12.851215] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:19:29.990 [2024-11-20 11:29:12.851279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:19:29.990 pt1
00:19:29.990 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:29.990 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:19:29.990 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:29.990 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:19:29.990 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:19:29.990 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:19:29.990 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:19:29.990 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:29.991 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:29.991 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:29.991 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:29.991 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:29.991 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:29.991 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:29.991 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:29.991 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:29.991 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:29.991 "name": "raid_bdev1",
00:19:29.991 "uuid": "93844104-b1d8-4f1e-b9ab-8d3fca15ac33",
00:19:29.991 "strip_size_kb": 0,
00:19:29.991 "state": "configuring",
00:19:29.991 "raid_level": "raid1",
00:19:29.991 "superblock": true,
00:19:29.991 "num_base_bdevs": 2,
00:19:29.991 "num_base_bdevs_discovered": 1,
00:19:29.991 "num_base_bdevs_operational": 2,
00:19:29.991 "base_bdevs_list": [
00:19:29.991 {
00:19:29.991 "name": "pt1",
00:19:29.991 "uuid": "00000000-0000-0000-0000-000000000001",
00:19:29.991 "is_configured": true,
00:19:29.991 "data_offset": 256,
00:19:29.991 "data_size": 7936
00:19:29.991 },
00:19:29.991 {
00:19:29.991 "name": null,
00:19:29.991 "uuid": "00000000-0000-0000-0000-000000000002",
00:19:29.991 "is_configured": false,
00:19:29.991 "data_offset": 256,
00:19:29.991 "data_size": 7936
00:19:29.991 }
00:19:29.991 ]
00:19:29.991 }'
00:19:29.991 11:29:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:29.991 11:29:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:30.250 [2024-11-20 11:29:13.335899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:19:30.250 [2024-11-20 11:29:13.335998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:30.250 [2024-11-20 11:29:13.336022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:19:30.250 [2024-11-20 11:29:13.336036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:30.250 [2024-11-20 11:29:13.336289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:30.250 [2024-11-20 11:29:13.336317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:19:30.250 [2024-11-20 11:29:13.336375] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:19:30.250 [2024-11-20 11:29:13.336401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:30.250 [2024-11-20 11:29:13.336542] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:19:30.250 [2024-11-20 11:29:13.336564] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:19:30.250 [2024-11-20 11:29:13.336644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:19:30.250 [2024-11-20 11:29:13.336771] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:19:30.250 [2024-11-20 11:29:13.336787] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:19:30.250 [2024-11-20 11:29:13.336910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:30.250 pt2
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:30.250 11:29:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.510 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:30.510 "name": "raid_bdev1",
00:19:30.510 "uuid": "93844104-b1d8-4f1e-b9ab-8d3fca15ac33",
00:19:30.510 "strip_size_kb": 0,
00:19:30.510 "state": "online",
00:19:30.510 "raid_level": "raid1",
00:19:30.510 "superblock": true,
00:19:30.510 "num_base_bdevs": 2,
00:19:30.510 "num_base_bdevs_discovered": 2,
00:19:30.510 "num_base_bdevs_operational": 2,
00:19:30.510 "base_bdevs_list": [
00:19:30.510 {
00:19:30.510 "name": "pt1",
00:19:30.510 "uuid": "00000000-0000-0000-0000-000000000001",
00:19:30.510 "is_configured": true,
00:19:30.510 "data_offset": 256,
00:19:30.510 "data_size": 7936
00:19:30.510 },
00:19:30.510 {
00:19:30.510 "name": "pt2",
00:19:30.510 "uuid": "00000000-0000-0000-0000-000000000002",
00:19:30.510 "is_configured": true,
00:19:30.510 "data_offset": 256,
00:19:30.510 "data_size": 7936
00:19:30.510 }
00:19:30.510 ]
00:19:30.510 }'
00:19:30.510 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:30.510 11:29:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:30.769 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:19:30.769 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:19:30.769 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:19:30.769 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:19:30.769 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:19:30.769 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:19:30.769 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:19:30.769 11:29:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.769 11:29:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:30.769 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:19:30.769 [2024-11-20 11:29:13.807611] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:19:30.769 11:29:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.769 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:19:30.769 "name": "raid_bdev1",
00:19:30.769 "aliases": [
00:19:30.769 "93844104-b1d8-4f1e-b9ab-8d3fca15ac33"
00:19:30.769 ],
00:19:30.769 "product_name": "Raid Volume",
00:19:30.769 "block_size": 4096,
00:19:30.769 "num_blocks": 7936,
00:19:30.769 "uuid": "93844104-b1d8-4f1e-b9ab-8d3fca15ac33",
00:19:30.769 "md_size": 32,
00:19:30.770 "md_interleave": false,
00:19:30.770 "dif_type": 0,
00:19:30.770 "assigned_rate_limits": {
00:19:30.770 "rw_ios_per_sec": 0,
00:19:30.770 "rw_mbytes_per_sec": 0,
00:19:30.770 "r_mbytes_per_sec": 0,
00:19:30.770 "w_mbytes_per_sec": 0
00:19:30.770 },
00:19:30.770 "claimed": false,
00:19:30.770 "zoned": false,
00:19:30.770 "supported_io_types": {
00:19:30.770 "read": true,
00:19:30.770 "write": true,
00:19:30.770 "unmap": false,
00:19:30.770 "flush": false,
00:19:30.770 "reset": true,
00:19:30.770 "nvme_admin": false,
00:19:30.770 "nvme_io": false,
00:19:30.770 "nvme_io_md": false,
00:19:30.770 "write_zeroes": true,
00:19:30.770 "zcopy": false,
00:19:30.770 "get_zone_info": false,
00:19:30.770 "zone_management": false,
00:19:30.770 "zone_append": false,
00:19:30.770 "compare": false,
00:19:30.770 "compare_and_write": false,
00:19:30.770 "abort": false,
00:19:30.770 "seek_hole": false,
00:19:30.770 "seek_data": false,
00:19:30.770 "copy": false,
00:19:30.770 "nvme_iov_md": false
00:19:30.770 },
00:19:30.770 "memory_domains": [
00:19:30.770 {
00:19:30.770 "dma_device_id": "system",
00:19:30.770 "dma_device_type": 1
00:19:30.770 },
00:19:30.770 {
00:19:30.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:30.770 "dma_device_type": 2
00:19:30.770 },
00:19:30.770 {
00:19:30.770 "dma_device_id": "system",
00:19:30.770 "dma_device_type": 1
00:19:30.770 },
00:19:30.770 {
00:19:30.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:30.770 "dma_device_type": 2
00:19:30.770 }
00:19:30.770 ],
00:19:30.770 "driver_specific": {
00:19:30.770 "raid": {
00:19:30.770 "uuid": "93844104-b1d8-4f1e-b9ab-8d3fca15ac33",
00:19:30.770 "strip_size_kb": 0,
00:19:30.770 "state": "online",
00:19:30.770 "raid_level": "raid1",
00:19:30.770 "superblock": true,
00:19:30.770 "num_base_bdevs": 2,
00:19:30.770 "num_base_bdevs_discovered": 2,
00:19:30.770 "num_base_bdevs_operational": 2,
00:19:30.770 "base_bdevs_list": [
00:19:30.770 {
00:19:30.770 "name": "pt1",
00:19:30.770 "uuid": "00000000-0000-0000-0000-000000000001",
00:19:30.770 "is_configured": true,
00:19:30.770 "data_offset": 256,
00:19:30.770 "data_size": 7936
00:19:30.770 },
00:19:30.770 {
00:19:30.770 "name": "pt2",
00:19:30.770 "uuid": "00000000-0000-0000-0000-000000000002",
00:19:30.770 "is_configured": true,
00:19:30.770 "data_offset": 256,
00:19:30.770 "data_size": 7936
00:19:30.770 }
00:19:30.770 ]
00:19:30.770 }
00:19:30.770 }
00:19:30.770 }'
00:19:30.770 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:19:31.030 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:19:31.030 pt2'
00:19:31.030 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:19:31.030 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0'
00:19:31.030 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:19:31.030 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:19:31.030 11:29:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.030 11:29:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:31.030 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:19:31.030 11:29:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.030 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:19:31.030 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:19:31.030 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:19:31.030 11:29:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:19:31.030 [2024-11-20 11:29:14.063185] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 93844104-b1d8-4f1e-b9ab-8d3fca15ac33 '!=' 93844104-b1d8-4f1e-b9ab-8d3fca15ac33 ']'
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:31.030 [2024-11-20 11:29:14.110850] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:31.030 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.290 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:31.290 "name": "raid_bdev1",
00:19:31.290 "uuid": "93844104-b1d8-4f1e-b9ab-8d3fca15ac33",
00:19:31.290 "strip_size_kb": 0,
00:19:31.290 "state": "online",
00:19:31.290 "raid_level": "raid1",
00:19:31.290 "superblock": true,
00:19:31.290 "num_base_bdevs": 2,
00:19:31.290 "num_base_bdevs_discovered": 1,
00:19:31.290 "num_base_bdevs_operational": 1,
00:19:31.290 "base_bdevs_list": [
00:19:31.290 {
00:19:31.290 "name": null,
00:19:31.290 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:31.290 "is_configured": false,
00:19:31.290 "data_offset": 0,
00:19:31.290 "data_size": 7936
00:19:31.290 },
00:19:31.290 {
00:19:31.290 "name": "pt2",
00:19:31.290 "uuid": "00000000-0000-0000-0000-000000000002",
00:19:31.290 "is_configured": true,
00:19:31.290 "data_offset": 256,
00:19:31.290 "data_size": 7936
00:19:31.290 }
00:19:31.290 ]
00:19:31.290 }'
00:19:31.290 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:31.290 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:31.550 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:19:31.550 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.550 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:31.550 [2024-11-20 11:29:14.565991] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:31.550 [2024-11-20 11:29:14.566027] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:31.551 [2024-11-20 11:29:14.566108] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:31.551 [2024-11-20 11:29:14.566162] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:31.551 [2024-11-20 11:29:14.566174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:31.551 [2024-11-20 11:29:14.629861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 [2024-11-20 11:29:14.629949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:31.551 [2024-11-20 11:29:14.629970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:19:31.551 [2024-11-20 11:29:14.629983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:31.551 [2024-11-20 11:29:14.632223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:31.551 [2024-11-20 11:29:14.632269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:19:31.551 [2024-11-20 11:29:14.632325] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:19:31.551 [2024-11-20 11:29:14.632377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:31.551 [2024-11-20 11:29:14.632510] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:19:31.551 [2024-11-20 11:29:14.632533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:19:31.551 [2024-11-20 11:29:14.632616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:19:31.551 [2024-11-20 11:29:14.632741] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:19:31.551 [2024-11-20 11:29:14.632757] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:19:31.551 [2024-11-20 11:29:14.632860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:31.551 pt2
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:31.551 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.809 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:31.809 "name": "raid_bdev1",
00:19:31.809 "uuid": "93844104-b1d8-4f1e-b9ab-8d3fca15ac33",
00:19:31.809 "strip_size_kb": 0,
00:19:31.809 "state": "online",
00:19:31.809 "raid_level": "raid1",
00:19:31.809 "superblock": true,
00:19:31.809 "num_base_bdevs": 2,
00:19:31.809 "num_base_bdevs_discovered": 1,
00:19:31.809 "num_base_bdevs_operational": 1,
00:19:31.809 "base_bdevs_list": [
00:19:31.809 {
00:19:31.809 "name": null,
00:19:31.809 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:31.809 "is_configured": false,
00:19:31.809 "data_offset": 256,
00:19:31.809 "data_size": 7936
00:19:31.809 },
00:19:31.809 {
00:19:31.809 "name": "pt2",
00:19:31.809 "uuid": "00000000-0000-0000-0000-000000000002",
00:19:31.809 "is_configured": true,
00:19:31.809 "data_offset": 256,
00:19:31.809 "data_size": 7936
00:19:31.809 }
00:19:31.809 ]
00:19:31.809 }'
00:19:31.809 11:29:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:31.809 11:29:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:32.067 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:19:32.067 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.067 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:32.067 [2024-11-20 11:29:15.097052] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:32.067 [2024-11-20 11:29:15.097091] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:32.067 [2024-11-20 11:29:15.097165] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:32.067 [2024-11-20 11:29:15.097217] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:32.067 [2024-11-20 11:29:15.097227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']'
00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:32.068 [2024-11-20 11:29:15.152996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:19:32.068 [2024-11-20 11:29:15.153063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:32.068 [2024-11-20 11:29:15.153086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:19:32.068 [2024-11-20 11:29:15.153097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:32.068 [2024-11-20 11:29:15.155327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:32.068 [2024-11-20 11:29:15.155367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:19:32.068 [2024-11-20 11:29:15.155428] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:19:32.068 [2024-11-20 11:29:15.155503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:19:32.068 [2024-11-20 11:29:15.155649] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:19:32.068 [2024-11-20 11:29:15.155663] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:32.068 [2024-11-20 11:29:15.155684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:19:32.068 [2024-11-20 11:29:15.155759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:32.068 [2024-11-20 11:29:15.155859] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900
00:19:32.068 [2024-11-20 11:29:15.155889] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:19:32.068 [2024-11-20 11:29:15.155976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:19:32.068 [2024-11-20 11:29:15.156109] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900
00:19:32.068 [2024-11-20 11:29:15.156129] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900
00:19:32.068 [2024-11-20 11:29:15.156260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:32.068 pt1
00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']'
00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.068 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.327 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.327 "name": "raid_bdev1", 00:19:32.327 "uuid": "93844104-b1d8-4f1e-b9ab-8d3fca15ac33", 00:19:32.327 "strip_size_kb": 0, 00:19:32.327 "state": "online", 00:19:32.327 "raid_level": "raid1", 00:19:32.327 "superblock": true, 00:19:32.327 "num_base_bdevs": 2, 00:19:32.327 "num_base_bdevs_discovered": 1, 00:19:32.327 
"num_base_bdevs_operational": 1, 00:19:32.327 "base_bdevs_list": [ 00:19:32.327 { 00:19:32.327 "name": null, 00:19:32.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.327 "is_configured": false, 00:19:32.327 "data_offset": 256, 00:19:32.327 "data_size": 7936 00:19:32.327 }, 00:19:32.327 { 00:19:32.327 "name": "pt2", 00:19:32.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:32.327 "is_configured": true, 00:19:32.327 "data_offset": 256, 00:19:32.327 "data_size": 7936 00:19:32.327 } 00:19:32.327 ] 00:19:32.327 }' 00:19:32.327 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.327 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.585 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:32.585 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.585 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.585 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:32.585 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.585 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:32.585 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:32.585 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.585 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.585 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:32.585 [2024-11-20 
11:29:15.648423] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:32.585 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.585 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 93844104-b1d8-4f1e-b9ab-8d3fca15ac33 '!=' 93844104-b1d8-4f1e-b9ab-8d3fca15ac33 ']' 00:19:32.585 11:29:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87685 00:19:32.585 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87685 ']' 00:19:32.585 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87685 00:19:32.585 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:32.585 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.844 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87685 00:19:32.844 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:32.844 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:32.844 killing process with pid 87685 00:19:32.844 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87685' 00:19:32.844 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87685 00:19:32.844 [2024-11-20 11:29:15.721177] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:32.844 [2024-11-20 11:29:15.721294] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:32.844 [2024-11-20 11:29:15.721350] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:19:32.844 [2024-11-20 11:29:15.721368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:32.844 11:29:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87685 00:19:32.844 [2024-11-20 11:29:15.952930] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:34.224 11:29:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:19:34.224 00:19:34.224 real 0m6.375s 00:19:34.224 user 0m9.659s 00:19:34.224 sys 0m1.146s 00:19:34.224 11:29:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.224 11:29:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.224 ************************************ 00:19:34.224 END TEST raid_superblock_test_md_separate 00:19:34.224 ************************************ 00:19:34.224 11:29:17 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:19:34.224 11:29:17 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:19:34.224 11:29:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:34.224 11:29:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.224 11:29:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.224 ************************************ 00:19:34.224 START TEST raid_rebuild_test_sb_md_separate 00:19:34.224 ************************************ 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:34.224 
11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88013 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88013 00:19:34.224 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88013 ']' 00:19:34.225 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.225 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.225 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:34.225 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.225 11:29:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.225 [2024-11-20 11:29:17.264144] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:19:34.225 [2024-11-20 11:29:17.264264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88013 ] 00:19:34.225 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:34.225 Zero copy mechanism will not be used. 00:19:34.484 [2024-11-20 11:29:17.421530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.484 [2024-11-20 11:29:17.535784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.742 [2024-11-20 11:29:17.737870] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:34.742 [2024-11-20 11:29:17.737916] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:35.001 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.001 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:35.001 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:35.001 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:19:35.001 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.001 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.260 BaseBdev1_malloc 
00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.260 [2024-11-20 11:29:18.149394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:35.260 [2024-11-20 11:29:18.149511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.260 [2024-11-20 11:29:18.149586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:35.260 [2024-11-20 11:29:18.149625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.260 [2024-11-20 11:29:18.151604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.260 [2024-11-20 11:29:18.151678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:35.260 BaseBdev1 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.260 BaseBdev2_malloc 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.260 [2024-11-20 11:29:18.206334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:35.260 [2024-11-20 11:29:18.206397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.260 [2024-11-20 11:29:18.206416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:35.260 [2024-11-20 11:29:18.206427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.260 [2024-11-20 11:29:18.208318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.260 [2024-11-20 11:29:18.208397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:35.260 BaseBdev2 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.260 spare_malloc 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.260 spare_delay 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.260 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.261 [2024-11-20 11:29:18.294223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:35.261 [2024-11-20 11:29:18.294287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.261 [2024-11-20 11:29:18.294309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:35.261 [2024-11-20 11:29:18.294320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.261 [2024-11-20 11:29:18.296237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.261 [2024-11-20 11:29:18.296328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:35.261 spare 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:35.261 [2024-11-20 11:29:18.306233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:35.261 [2024-11-20 11:29:18.308074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:35.261 [2024-11-20 11:29:18.308248] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:35.261 [2024-11-20 11:29:18.308263] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:35.261 [2024-11-20 11:29:18.308334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:35.261 [2024-11-20 11:29:18.308468] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:35.261 [2024-11-20 11:29:18.308477] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:35.261 [2024-11-20 11:29:18.308582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:35.261 11:29:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.261 "name": "raid_bdev1", 00:19:35.261 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:35.261 "strip_size_kb": 0, 00:19:35.261 "state": "online", 00:19:35.261 "raid_level": "raid1", 00:19:35.261 "superblock": true, 00:19:35.261 "num_base_bdevs": 2, 00:19:35.261 "num_base_bdevs_discovered": 2, 00:19:35.261 "num_base_bdevs_operational": 2, 00:19:35.261 "base_bdevs_list": [ 00:19:35.261 { 00:19:35.261 "name": "BaseBdev1", 00:19:35.261 "uuid": "7f8ec35e-4207-5c28-af41-fb1b54791f36", 00:19:35.261 "is_configured": true, 00:19:35.261 "data_offset": 256, 00:19:35.261 "data_size": 7936 00:19:35.261 }, 00:19:35.261 { 00:19:35.261 "name": "BaseBdev2", 00:19:35.261 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:35.261 "is_configured": true, 00:19:35.261 "data_offset": 256, 00:19:35.261 "data_size": 7936 
00:19:35.261 } 00:19:35.261 ] 00:19:35.261 }' 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.261 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.829 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:35.829 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:35.829 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.829 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.830 [2024-11-20 11:29:18.769745] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:35.830 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.830 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:35.830 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:35.830 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.830 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.830 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.830 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.830 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:35.830 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:35.830 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:35.830 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:35.830 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:35.830 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:35.830 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:35.830 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:35.830 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:35.830 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:35.830 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:35.830 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:35.830 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:35.830 11:29:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:36.088 [2024-11-20 11:29:19.049044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:36.088 /dev/nbd0 00:19:36.088 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:36.088 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:36.088 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:36.088 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:19:36.088 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:36.088 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:36.088 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:36.088 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:36.088 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:36.088 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:36.088 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:36.088 1+0 records in 00:19:36.088 1+0 records out 00:19:36.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382759 s, 10.7 MB/s 00:19:36.088 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:36.088 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:36.088 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:36.088 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:36.088 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:36.088 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:36.088 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:36.088 11:29:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:36.088 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:36.088 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:37.025 7936+0 records in 00:19:37.025 7936+0 records out 00:19:37.025 32505856 bytes (33 MB, 31 MiB) copied, 0.664411 s, 48.9 MB/s 00:19:37.025 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:37.025 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:37.025 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:37.025 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:37.025 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:37.025 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:37.025 11:29:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:37.025 [2024-11-20 11:29:20.030671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:37.025 11:29:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.025 [2024-11-20 11:29:20.050869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.025 "name": "raid_bdev1", 00:19:37.025 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:37.025 "strip_size_kb": 0, 00:19:37.025 "state": "online", 00:19:37.025 "raid_level": "raid1", 00:19:37.025 "superblock": true, 00:19:37.025 "num_base_bdevs": 2, 00:19:37.025 "num_base_bdevs_discovered": 1, 00:19:37.025 "num_base_bdevs_operational": 1, 00:19:37.025 "base_bdevs_list": [ 00:19:37.025 { 00:19:37.025 "name": null, 00:19:37.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.025 "is_configured": false, 00:19:37.025 "data_offset": 0, 00:19:37.025 "data_size": 7936 00:19:37.025 }, 00:19:37.025 { 00:19:37.025 "name": "BaseBdev2", 00:19:37.025 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:37.025 "is_configured": true, 00:19:37.025 "data_offset": 256, 00:19:37.025 "data_size": 7936 00:19:37.025 } 00:19:37.025 ] 00:19:37.025 }' 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.025 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:37.595 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:37.595 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.595 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.595 [2024-11-20 11:29:20.458211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:37.595 [2024-11-20 11:29:20.476175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:37.595 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.595 11:29:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:37.595 [2024-11-20 11:29:20.478381] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:38.535 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.535 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.535 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:38.535 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:38.535 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.535 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.535 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.535 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.535 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.535 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.535 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.535 "name": "raid_bdev1", 00:19:38.535 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:38.535 "strip_size_kb": 0, 00:19:38.535 "state": "online", 00:19:38.535 "raid_level": "raid1", 00:19:38.535 "superblock": true, 00:19:38.535 "num_base_bdevs": 2, 00:19:38.535 "num_base_bdevs_discovered": 2, 00:19:38.535 "num_base_bdevs_operational": 2, 00:19:38.535 "process": { 00:19:38.535 "type": "rebuild", 00:19:38.535 "target": "spare", 00:19:38.535 "progress": { 00:19:38.535 "blocks": 2560, 00:19:38.535 "percent": 32 00:19:38.535 } 00:19:38.535 }, 00:19:38.535 "base_bdevs_list": [ 00:19:38.535 { 00:19:38.535 "name": "spare", 00:19:38.535 "uuid": "3e541e7f-9d5e-52d9-acd8-dc872784d03f", 00:19:38.535 "is_configured": true, 00:19:38.535 "data_offset": 256, 00:19:38.535 "data_size": 7936 00:19:38.535 }, 00:19:38.535 { 00:19:38.535 "name": "BaseBdev2", 00:19:38.535 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:38.535 "is_configured": true, 00:19:38.535 "data_offset": 256, 00:19:38.535 "data_size": 7936 00:19:38.535 } 00:19:38.535 ] 00:19:38.535 }' 00:19:38.535 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.535 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:38.535 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.535 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:38.535 11:29:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:38.535 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.535 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.535 [2024-11-20 11:29:21.633894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:38.795 [2024-11-20 11:29:21.684375] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:38.795 [2024-11-20 11:29:21.684467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.795 [2024-11-20 11:29:21.684486] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:38.795 [2024-11-20 11:29:21.684497] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:38.795 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.795 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:38.795 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.795 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.795 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:38.795 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:38.795 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:38.795 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.795 11:29:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.795 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.795 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.795 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.795 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.795 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.795 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.795 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.795 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.795 "name": "raid_bdev1", 00:19:38.795 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:38.795 "strip_size_kb": 0, 00:19:38.795 "state": "online", 00:19:38.795 "raid_level": "raid1", 00:19:38.795 "superblock": true, 00:19:38.795 "num_base_bdevs": 2, 00:19:38.795 "num_base_bdevs_discovered": 1, 00:19:38.795 "num_base_bdevs_operational": 1, 00:19:38.795 "base_bdevs_list": [ 00:19:38.795 { 00:19:38.795 "name": null, 00:19:38.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.795 "is_configured": false, 00:19:38.795 "data_offset": 0, 00:19:38.795 "data_size": 7936 00:19:38.795 }, 00:19:38.795 { 00:19:38.795 "name": "BaseBdev2", 00:19:38.795 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:38.795 "is_configured": true, 00:19:38.795 "data_offset": 256, 00:19:38.795 "data_size": 7936 00:19:38.795 } 00:19:38.795 ] 00:19:38.795 }' 00:19:38.795 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.795 11:29:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.055 11:29:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:39.055 11:29:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.055 11:29:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:39.055 11:29:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:39.055 11:29:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.055 11:29:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.055 11:29:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.055 11:29:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.055 11:29:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.055 11:29:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.315 11:29:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:39.315 "name": "raid_bdev1", 00:19:39.315 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:39.315 "strip_size_kb": 0, 00:19:39.315 "state": "online", 00:19:39.315 "raid_level": "raid1", 00:19:39.315 "superblock": true, 00:19:39.315 "num_base_bdevs": 2, 00:19:39.315 "num_base_bdevs_discovered": 1, 00:19:39.315 "num_base_bdevs_operational": 1, 00:19:39.315 "base_bdevs_list": [ 00:19:39.315 { 00:19:39.315 "name": null, 00:19:39.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.315 
"is_configured": false, 00:19:39.315 "data_offset": 0, 00:19:39.315 "data_size": 7936 00:19:39.315 }, 00:19:39.315 { 00:19:39.315 "name": "BaseBdev2", 00:19:39.315 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:39.315 "is_configured": true, 00:19:39.315 "data_offset": 256, 00:19:39.315 "data_size": 7936 00:19:39.315 } 00:19:39.315 ] 00:19:39.315 }' 00:19:39.315 11:29:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:39.315 11:29:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:39.315 11:29:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.315 11:29:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:39.315 11:29:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:39.315 11:29:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.315 11:29:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.315 [2024-11-20 11:29:22.288901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:39.315 [2024-11-20 11:29:22.304507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:39.315 11:29:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.315 11:29:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:39.315 [2024-11-20 11:29:22.306380] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:40.254 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:40.254 11:29:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:40.254 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:40.254 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:40.254 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:40.254 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.254 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.254 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.254 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.254 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.254 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:40.254 "name": "raid_bdev1", 00:19:40.254 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:40.254 "strip_size_kb": 0, 00:19:40.254 "state": "online", 00:19:40.254 "raid_level": "raid1", 00:19:40.254 "superblock": true, 00:19:40.254 "num_base_bdevs": 2, 00:19:40.254 "num_base_bdevs_discovered": 2, 00:19:40.254 "num_base_bdevs_operational": 2, 00:19:40.254 "process": { 00:19:40.254 "type": "rebuild", 00:19:40.254 "target": "spare", 00:19:40.254 "progress": { 00:19:40.254 "blocks": 2560, 00:19:40.254 "percent": 32 00:19:40.255 } 00:19:40.255 }, 00:19:40.255 "base_bdevs_list": [ 00:19:40.255 { 00:19:40.255 "name": "spare", 00:19:40.255 "uuid": "3e541e7f-9d5e-52d9-acd8-dc872784d03f", 00:19:40.255 "is_configured": true, 00:19:40.255 "data_offset": 256, 00:19:40.255 "data_size": 7936 00:19:40.255 }, 
00:19:40.255 { 00:19:40.255 "name": "BaseBdev2", 00:19:40.255 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:40.255 "is_configured": true, 00:19:40.255 "data_offset": 256, 00:19:40.255 "data_size": 7936 00:19:40.255 } 00:19:40.255 ] 00:19:40.255 }' 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:40.515 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=729 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:40.515 11:29:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:40.515 "name": "raid_bdev1", 00:19:40.515 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:40.515 "strip_size_kb": 0, 00:19:40.515 "state": "online", 00:19:40.515 "raid_level": "raid1", 00:19:40.515 "superblock": true, 00:19:40.515 "num_base_bdevs": 2, 00:19:40.515 "num_base_bdevs_discovered": 2, 00:19:40.515 "num_base_bdevs_operational": 2, 00:19:40.515 "process": { 00:19:40.515 "type": "rebuild", 00:19:40.515 "target": "spare", 00:19:40.515 "progress": { 00:19:40.515 "blocks": 2816, 00:19:40.515 "percent": 35 00:19:40.515 } 00:19:40.515 }, 00:19:40.515 "base_bdevs_list": [ 00:19:40.515 { 00:19:40.515 "name": "spare", 00:19:40.515 "uuid": "3e541e7f-9d5e-52d9-acd8-dc872784d03f", 00:19:40.515 "is_configured": true, 00:19:40.515 "data_offset": 256, 00:19:40.515 "data_size": 7936 00:19:40.515 }, 00:19:40.515 { 00:19:40.515 "name": "BaseBdev2", 00:19:40.515 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:40.515 
"is_configured": true, 00:19:40.515 "data_offset": 256, 00:19:40.515 "data_size": 7936 00:19:40.515 } 00:19:40.515 ] 00:19:40.515 }' 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:40.515 11:29:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:42.000 11:29:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:42.000 11:29:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:42.000 11:29:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:42.000 11:29:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:42.000 11:29:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:42.000 11:29:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:42.000 11:29:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.000 11:29:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.000 11:29:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.000 11:29:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.000 11:29:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.000 11:29:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:42.000 "name": "raid_bdev1", 00:19:42.000 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:42.000 "strip_size_kb": 0, 00:19:42.000 "state": "online", 00:19:42.000 "raid_level": "raid1", 00:19:42.000 "superblock": true, 00:19:42.000 "num_base_bdevs": 2, 00:19:42.000 "num_base_bdevs_discovered": 2, 00:19:42.000 "num_base_bdevs_operational": 2, 00:19:42.000 "process": { 00:19:42.000 "type": "rebuild", 00:19:42.000 "target": "spare", 00:19:42.000 "progress": { 00:19:42.000 "blocks": 5632, 00:19:42.000 "percent": 70 00:19:42.000 } 00:19:42.000 }, 00:19:42.000 "base_bdevs_list": [ 00:19:42.000 { 00:19:42.000 "name": "spare", 00:19:42.000 "uuid": "3e541e7f-9d5e-52d9-acd8-dc872784d03f", 00:19:42.000 "is_configured": true, 00:19:42.000 "data_offset": 256, 00:19:42.000 "data_size": 7936 00:19:42.000 }, 00:19:42.000 { 00:19:42.000 "name": "BaseBdev2", 00:19:42.000 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:42.000 "is_configured": true, 00:19:42.000 "data_offset": 256, 00:19:42.000 "data_size": 7936 00:19:42.000 } 00:19:42.000 ] 00:19:42.000 }' 00:19:42.000 11:29:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:42.000 11:29:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:42.000 11:29:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:42.000 11:29:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:42.000 11:29:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:42.569 [2024-11-20 11:29:25.421641] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:19:42.569 [2024-11-20 11:29:25.421822] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:42.569 [2024-11-20 11:29:25.421990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:42.828 "name": "raid_bdev1", 00:19:42.828 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:42.828 "strip_size_kb": 0, 00:19:42.828 "state": "online", 00:19:42.828 "raid_level": "raid1", 00:19:42.828 "superblock": true, 00:19:42.828 
"num_base_bdevs": 2, 00:19:42.828 "num_base_bdevs_discovered": 2, 00:19:42.828 "num_base_bdevs_operational": 2, 00:19:42.828 "base_bdevs_list": [ 00:19:42.828 { 00:19:42.828 "name": "spare", 00:19:42.828 "uuid": "3e541e7f-9d5e-52d9-acd8-dc872784d03f", 00:19:42.828 "is_configured": true, 00:19:42.828 "data_offset": 256, 00:19:42.828 "data_size": 7936 00:19:42.828 }, 00:19:42.828 { 00:19:42.828 "name": "BaseBdev2", 00:19:42.828 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:42.828 "is_configured": true, 00:19:42.828 "data_offset": 256, 00:19:42.828 "data_size": 7936 00:19:42.828 } 00:19:42.828 ] 00:19:42.828 }' 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:42.828 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.087 11:29:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.087 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.088 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.088 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.088 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:43.088 "name": "raid_bdev1", 00:19:43.088 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:43.088 "strip_size_kb": 0, 00:19:43.088 "state": "online", 00:19:43.088 "raid_level": "raid1", 00:19:43.088 "superblock": true, 00:19:43.088 "num_base_bdevs": 2, 00:19:43.088 "num_base_bdevs_discovered": 2, 00:19:43.088 "num_base_bdevs_operational": 2, 00:19:43.088 "base_bdevs_list": [ 00:19:43.088 { 00:19:43.088 "name": "spare", 00:19:43.088 "uuid": "3e541e7f-9d5e-52d9-acd8-dc872784d03f", 00:19:43.088 "is_configured": true, 00:19:43.088 "data_offset": 256, 00:19:43.088 "data_size": 7936 00:19:43.088 }, 00:19:43.088 { 00:19:43.088 "name": "BaseBdev2", 00:19:43.088 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:43.088 "is_configured": true, 00:19:43.088 "data_offset": 256, 00:19:43.088 "data_size": 7936 00:19:43.088 } 00:19:43.088 ] 00:19:43.088 }' 00:19:43.088 11:29:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:43.088 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:43.088 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:43.088 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:43.088 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:43.088 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.088 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.088 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.088 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.088 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:43.088 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.088 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.088 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.088 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.088 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.088 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.088 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.088 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.088 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.088 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.088 "name": "raid_bdev1", 00:19:43.088 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:43.088 
"strip_size_kb": 0, 00:19:43.088 "state": "online", 00:19:43.088 "raid_level": "raid1", 00:19:43.088 "superblock": true, 00:19:43.088 "num_base_bdevs": 2, 00:19:43.088 "num_base_bdevs_discovered": 2, 00:19:43.088 "num_base_bdevs_operational": 2, 00:19:43.088 "base_bdevs_list": [ 00:19:43.088 { 00:19:43.088 "name": "spare", 00:19:43.088 "uuid": "3e541e7f-9d5e-52d9-acd8-dc872784d03f", 00:19:43.088 "is_configured": true, 00:19:43.088 "data_offset": 256, 00:19:43.088 "data_size": 7936 00:19:43.088 }, 00:19:43.088 { 00:19:43.088 "name": "BaseBdev2", 00:19:43.088 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:43.088 "is_configured": true, 00:19:43.088 "data_offset": 256, 00:19:43.088 "data_size": 7936 00:19:43.088 } 00:19:43.088 ] 00:19:43.088 }' 00:19:43.088 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.088 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.657 [2024-11-20 11:29:26.550479] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:43.657 [2024-11-20 11:29:26.550520] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:43.657 [2024-11-20 11:29:26.550609] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:43.657 [2024-11-20 11:29:26.550700] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:43.657 [2024-11-20 11:29:26.550717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:43.657 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:43.917 /dev/nbd0 00:19:43.917 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:43.917 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:43.917 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:43.917 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:43.917 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:43.917 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:43.917 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:43.917 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:43.917 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:43.917 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:43.917 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:43.917 1+0 records in 00:19:43.917 1+0 records out 00:19:43.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376858 s, 10.9 MB/s 00:19:43.917 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:43.917 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:43.917 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:43.917 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:43.917 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:43.917 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:43.917 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:43.917 11:29:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:44.177 /dev/nbd1 00:19:44.177 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:44.177 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:44.177 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:44.177 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:44.177 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:44.177 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:44.177 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:44.177 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:44.177 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:44.177 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:44.177 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:44.177 1+0 records in 00:19:44.177 1+0 records out 00:19:44.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315001 s, 13.0 MB/s 00:19:44.177 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:44.177 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:44.177 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:44.177 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:44.177 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:44.178 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:44.178 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:44.178 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:44.438 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:44.438 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:44.438 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:44.438 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:19:44.438 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:44.438 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:44.438 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:44.697 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:44.697 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:44.697 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:44.697 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:44.697 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:44.698 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:44.698 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:44.698 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:44.698 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:44.698 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:44.958 [2024-11-20 11:29:27.864110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:44.958 [2024-11-20 11:29:27.864190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.958 [2024-11-20 11:29:27.864218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:44.958 [2024-11-20 11:29:27.864228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:19:44.958 [2024-11-20 11:29:27.866406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.958 [2024-11-20 11:29:27.866448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:44.958 [2024-11-20 11:29:27.866531] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:44.958 [2024-11-20 11:29:27.866589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:44.958 [2024-11-20 11:29:27.866734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:44.958 spare 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:44.958 [2024-11-20 11:29:27.966637] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:44.958 [2024-11-20 11:29:27.966691] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:44.958 [2024-11-20 11:29:27.966847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:44.958 [2024-11-20 11:29:27.967039] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:44.958 [2024-11-20 11:29:27.967055] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:44.958 [2024-11-20 11:29:27.967209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:44.958 11:29:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.958 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.958 "name": "raid_bdev1", 00:19:44.958 "uuid": 
"d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:44.958 "strip_size_kb": 0, 00:19:44.958 "state": "online", 00:19:44.958 "raid_level": "raid1", 00:19:44.958 "superblock": true, 00:19:44.958 "num_base_bdevs": 2, 00:19:44.958 "num_base_bdevs_discovered": 2, 00:19:44.958 "num_base_bdevs_operational": 2, 00:19:44.958 "base_bdevs_list": [ 00:19:44.958 { 00:19:44.958 "name": "spare", 00:19:44.958 "uuid": "3e541e7f-9d5e-52d9-acd8-dc872784d03f", 00:19:44.958 "is_configured": true, 00:19:44.958 "data_offset": 256, 00:19:44.958 "data_size": 7936 00:19:44.958 }, 00:19:44.958 { 00:19:44.958 "name": "BaseBdev2", 00:19:44.958 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:44.958 "is_configured": true, 00:19:44.958 "data_offset": 256, 00:19:44.958 "data_size": 7936 00:19:44.958 } 00:19:44.958 ] 00:19:44.958 }' 00:19:44.958 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.958 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:45.526 "name": "raid_bdev1", 00:19:45.526 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:45.526 "strip_size_kb": 0, 00:19:45.526 "state": "online", 00:19:45.526 "raid_level": "raid1", 00:19:45.526 "superblock": true, 00:19:45.526 "num_base_bdevs": 2, 00:19:45.526 "num_base_bdevs_discovered": 2, 00:19:45.526 "num_base_bdevs_operational": 2, 00:19:45.526 "base_bdevs_list": [ 00:19:45.526 { 00:19:45.526 "name": "spare", 00:19:45.526 "uuid": "3e541e7f-9d5e-52d9-acd8-dc872784d03f", 00:19:45.526 "is_configured": true, 00:19:45.526 "data_offset": 256, 00:19:45.526 "data_size": 7936 00:19:45.526 }, 00:19:45.526 { 00:19:45.526 "name": "BaseBdev2", 00:19:45.526 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:45.526 "is_configured": true, 00:19:45.526 "data_offset": 256, 00:19:45.526 "data_size": 7936 00:19:45.526 } 00:19:45.526 ] 00:19:45.526 }' 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.526 11:29:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.526 [2024-11-20 11:29:28.631479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.526 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.786 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.786 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.786 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.786 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.786 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.786 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.786 "name": "raid_bdev1", 00:19:45.786 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:45.786 "strip_size_kb": 0, 00:19:45.786 "state": "online", 00:19:45.786 "raid_level": "raid1", 00:19:45.786 "superblock": true, 00:19:45.786 "num_base_bdevs": 2, 00:19:45.786 "num_base_bdevs_discovered": 1, 00:19:45.786 "num_base_bdevs_operational": 1, 00:19:45.786 "base_bdevs_list": [ 00:19:45.786 { 00:19:45.786 "name": null, 00:19:45.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.786 "is_configured": false, 00:19:45.786 "data_offset": 0, 00:19:45.786 "data_size": 7936 00:19:45.786 }, 00:19:45.786 { 00:19:45.786 "name": "BaseBdev2", 00:19:45.786 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:45.786 "is_configured": true, 00:19:45.786 "data_offset": 256, 00:19:45.786 "data_size": 7936 00:19:45.786 } 00:19:45.786 ] 00:19:45.786 }' 00:19:45.786 11:29:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.786 11:29:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.045 11:29:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:46.045 11:29:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.045 11:29:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.045 [2024-11-20 11:29:29.094699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:46.045 [2024-11-20 11:29:29.094910] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:46.045 [2024-11-20 11:29:29.094931] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:46.045 [2024-11-20 11:29:29.094969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:46.045 [2024-11-20 11:29:29.111395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:46.045 11:29:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.045 11:29:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:46.045 [2024-11-20 11:29:29.113576] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:47.431 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:47.431 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.431 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:47.431 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:19:47.431 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.431 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.431 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.431 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.431 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.431 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.431 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.431 "name": "raid_bdev1", 00:19:47.431 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:47.431 "strip_size_kb": 0, 00:19:47.431 "state": "online", 00:19:47.431 "raid_level": "raid1", 00:19:47.431 "superblock": true, 00:19:47.431 "num_base_bdevs": 2, 00:19:47.431 "num_base_bdevs_discovered": 2, 00:19:47.431 "num_base_bdevs_operational": 2, 00:19:47.431 "process": { 00:19:47.431 "type": "rebuild", 00:19:47.431 "target": "spare", 00:19:47.431 "progress": { 00:19:47.431 "blocks": 2560, 00:19:47.431 "percent": 32 00:19:47.431 } 00:19:47.431 }, 00:19:47.431 "base_bdevs_list": [ 00:19:47.431 { 00:19:47.431 "name": "spare", 00:19:47.431 "uuid": "3e541e7f-9d5e-52d9-acd8-dc872784d03f", 00:19:47.431 "is_configured": true, 00:19:47.431 "data_offset": 256, 00:19:47.431 "data_size": 7936 00:19:47.431 }, 00:19:47.431 { 00:19:47.431 "name": "BaseBdev2", 00:19:47.431 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:47.431 "is_configured": true, 00:19:47.431 "data_offset": 256, 00:19:47.431 "data_size": 7936 00:19:47.431 } 00:19:47.431 ] 00:19:47.431 }' 00:19:47.431 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.431 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:47.431 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.431 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:47.431 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:47.431 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.431 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.431 [2024-11-20 11:29:30.273211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:47.431 [2024-11-20 11:29:30.319526] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:47.431 [2024-11-20 11:29:30.319607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.431 [2024-11-20 11:29:30.319624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:47.431 [2024-11-20 11:29:30.319647] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:47.431 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.431 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:47.432 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.432 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.432 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.432 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.432 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:47.432 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.432 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.432 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.432 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.432 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.432 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.432 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.432 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.432 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.432 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.432 "name": "raid_bdev1", 00:19:47.432 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:47.432 "strip_size_kb": 0, 00:19:47.432 "state": "online", 00:19:47.432 "raid_level": "raid1", 00:19:47.432 "superblock": true, 00:19:47.432 "num_base_bdevs": 2, 00:19:47.432 "num_base_bdevs_discovered": 1, 00:19:47.432 "num_base_bdevs_operational": 1, 00:19:47.432 "base_bdevs_list": [ 00:19:47.432 { 00:19:47.432 "name": null, 00:19:47.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.432 
"is_configured": false, 00:19:47.432 "data_offset": 0, 00:19:47.432 "data_size": 7936 00:19:47.432 }, 00:19:47.432 { 00:19:47.432 "name": "BaseBdev2", 00:19:47.432 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:47.432 "is_configured": true, 00:19:47.432 "data_offset": 256, 00:19:47.432 "data_size": 7936 00:19:47.432 } 00:19:47.432 ] 00:19:47.432 }' 00:19:47.432 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.432 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.001 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:48.001 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.001 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.001 [2024-11-20 11:29:30.828884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:48.001 [2024-11-20 11:29:30.829123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.001 [2024-11-20 11:29:30.829211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:48.001 [2024-11-20 11:29:30.829284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.001 [2024-11-20 11:29:30.829642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.001 [2024-11-20 11:29:30.829758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:48.001 [2024-11-20 11:29:30.829884] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:48.001 [2024-11-20 11:29:30.829908] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:19:48.001 [2024-11-20 11:29:30.829920] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:48.001 [2024-11-20 11:29:30.830006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:48.001 [2024-11-20 11:29:30.846712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:48.001 spare 00:19:48.001 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.001 11:29:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:48.001 [2024-11-20 11:29:30.848766] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:48.940 11:29:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:48.940 11:29:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:48.940 11:29:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:48.940 11:29:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:48.940 11:29:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:48.940 11:29:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.940 11:29:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.940 11:29:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.940 11:29:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.940 11:29:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:48.940 11:29:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:48.940 "name": "raid_bdev1", 00:19:48.940 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:48.940 "strip_size_kb": 0, 00:19:48.940 "state": "online", 00:19:48.940 "raid_level": "raid1", 00:19:48.940 "superblock": true, 00:19:48.940 "num_base_bdevs": 2, 00:19:48.940 "num_base_bdevs_discovered": 2, 00:19:48.940 "num_base_bdevs_operational": 2, 00:19:48.940 "process": { 00:19:48.940 "type": "rebuild", 00:19:48.940 "target": "spare", 00:19:48.940 "progress": { 00:19:48.940 "blocks": 2560, 00:19:48.940 "percent": 32 00:19:48.940 } 00:19:48.940 }, 00:19:48.940 "base_bdevs_list": [ 00:19:48.940 { 00:19:48.940 "name": "spare", 00:19:48.940 "uuid": "3e541e7f-9d5e-52d9-acd8-dc872784d03f", 00:19:48.940 "is_configured": true, 00:19:48.940 "data_offset": 256, 00:19:48.940 "data_size": 7936 00:19:48.940 }, 00:19:48.940 { 00:19:48.940 "name": "BaseBdev2", 00:19:48.940 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:48.940 "is_configured": true, 00:19:48.940 "data_offset": 256, 00:19:48.940 "data_size": 7936 00:19:48.940 } 00:19:48.940 ] 00:19:48.940 }' 00:19:48.940 11:29:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:48.940 11:29:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:48.940 11:29:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:48.940 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:48.940 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:48.940 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.940 11:29:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.940 [2024-11-20 11:29:32.008801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:49.199 [2024-11-20 11:29:32.054501] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:49.199 [2024-11-20 11:29:32.055013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.199 [2024-11-20 11:29:32.055045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:49.199 [2024-11-20 11:29:32.055056] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:49.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:49.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:49.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:49.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:49.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:49.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:49.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.199 11:29:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.199 "name": "raid_bdev1", 00:19:49.199 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:49.199 "strip_size_kb": 0, 00:19:49.199 "state": "online", 00:19:49.199 "raid_level": "raid1", 00:19:49.199 "superblock": true, 00:19:49.199 "num_base_bdevs": 2, 00:19:49.199 "num_base_bdevs_discovered": 1, 00:19:49.199 "num_base_bdevs_operational": 1, 00:19:49.199 "base_bdevs_list": [ 00:19:49.199 { 00:19:49.199 "name": null, 00:19:49.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.199 "is_configured": false, 00:19:49.199 "data_offset": 0, 00:19:49.199 "data_size": 7936 00:19:49.199 }, 00:19:49.199 { 00:19:49.199 "name": "BaseBdev2", 00:19:49.199 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:49.199 "is_configured": true, 00:19:49.199 "data_offset": 256, 00:19:49.199 "data_size": 7936 00:19:49.199 } 00:19:49.199 ] 00:19:49.199 }' 00:19:49.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.199 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.458 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:19:49.458 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.458 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:49.458 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:49.458 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.458 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.458 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.458 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.458 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.458 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.718 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.718 "name": "raid_bdev1", 00:19:49.718 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:49.718 "strip_size_kb": 0, 00:19:49.718 "state": "online", 00:19:49.718 "raid_level": "raid1", 00:19:49.718 "superblock": true, 00:19:49.718 "num_base_bdevs": 2, 00:19:49.718 "num_base_bdevs_discovered": 1, 00:19:49.718 "num_base_bdevs_operational": 1, 00:19:49.719 "base_bdevs_list": [ 00:19:49.719 { 00:19:49.719 "name": null, 00:19:49.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.719 "is_configured": false, 00:19:49.719 "data_offset": 0, 00:19:49.719 "data_size": 7936 00:19:49.719 }, 00:19:49.719 { 00:19:49.719 "name": "BaseBdev2", 00:19:49.719 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:49.719 "is_configured": true, 
00:19:49.719 "data_offset": 256, 00:19:49.719 "data_size": 7936 00:19:49.719 } 00:19:49.719 ] 00:19:49.719 }' 00:19:49.719 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.719 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:49.719 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.719 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:49.719 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:49.719 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.719 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.719 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.719 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:49.719 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.719 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.719 [2024-11-20 11:29:32.702233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:49.719 [2024-11-20 11:29:32.702300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:49.719 [2024-11-20 11:29:32.702326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:49.719 [2024-11-20 11:29:32.702336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:49.719 [2024-11-20 11:29:32.702587] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:49.719 [2024-11-20 11:29:32.702608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:49.719 [2024-11-20 11:29:32.702675] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:49.719 [2024-11-20 11:29:32.702688] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:49.719 [2024-11-20 11:29:32.702696] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:49.719 [2024-11-20 11:29:32.702707] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:49.719 BaseBdev1 00:19:49.719 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.719 11:29:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:50.657 11:29:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:50.658 11:29:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:50.658 11:29:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:50.658 11:29:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:50.658 11:29:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:50.658 11:29:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:50.658 11:29:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.658 11:29:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.658 11:29:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.658 11:29:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.658 11:29:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.658 11:29:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.658 11:29:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.658 11:29:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.658 11:29:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.658 11:29:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.658 "name": "raid_bdev1", 00:19:50.658 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:50.658 "strip_size_kb": 0, 00:19:50.658 "state": "online", 00:19:50.658 "raid_level": "raid1", 00:19:50.658 "superblock": true, 00:19:50.658 "num_base_bdevs": 2, 00:19:50.658 "num_base_bdevs_discovered": 1, 00:19:50.658 "num_base_bdevs_operational": 1, 00:19:50.658 "base_bdevs_list": [ 00:19:50.658 { 00:19:50.658 "name": null, 00:19:50.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.658 "is_configured": false, 00:19:50.658 "data_offset": 0, 00:19:50.658 "data_size": 7936 00:19:50.658 }, 00:19:50.658 { 00:19:50.658 "name": "BaseBdev2", 00:19:50.658 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:50.658 "is_configured": true, 00:19:50.658 "data_offset": 256, 00:19:50.658 "data_size": 7936 00:19:50.658 } 00:19:50.658 ] 00:19:50.658 }' 00:19:50.658 11:29:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.658 11:29:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.297 "name": "raid_bdev1", 00:19:51.297 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:51.297 "strip_size_kb": 0, 00:19:51.297 "state": "online", 00:19:51.297 "raid_level": "raid1", 00:19:51.297 "superblock": true, 00:19:51.297 "num_base_bdevs": 2, 00:19:51.297 "num_base_bdevs_discovered": 1, 00:19:51.297 "num_base_bdevs_operational": 1, 00:19:51.297 "base_bdevs_list": [ 00:19:51.297 { 00:19:51.297 "name": null, 00:19:51.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.297 "is_configured": false, 00:19:51.297 "data_offset": 0, 00:19:51.297 
"data_size": 7936 00:19:51.297 }, 00:19:51.297 { 00:19:51.297 "name": "BaseBdev2", 00:19:51.297 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:51.297 "is_configured": true, 00:19:51.297 "data_offset": 256, 00:19:51.297 "data_size": 7936 00:19:51.297 } 00:19:51.297 ] 00:19:51.297 }' 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.297 [2024-11-20 11:29:34.331673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:51.297 [2024-11-20 11:29:34.331926] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:51.297 [2024-11-20 11:29:34.332002] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:51.297 request: 00:19:51.297 { 00:19:51.297 "base_bdev": "BaseBdev1", 00:19:51.297 "raid_bdev": "raid_bdev1", 00:19:51.297 "method": "bdev_raid_add_base_bdev", 00:19:51.297 "req_id": 1 00:19:51.297 } 00:19:51.297 Got JSON-RPC error response 00:19:51.297 response: 00:19:51.297 { 00:19:51.297 "code": -22, 00:19:51.297 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:51.297 } 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:51.297 11:29:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:52.236 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:52.236 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:52.236 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.236 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.236 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.236 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:52.236 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.236 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.236 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.497 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.497 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.498 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.498 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.498 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.498 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.498 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.498 "name": "raid_bdev1", 00:19:52.498 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:52.498 "strip_size_kb": 0, 00:19:52.498 "state": "online", 00:19:52.498 "raid_level": "raid1", 00:19:52.498 "superblock": true, 00:19:52.498 "num_base_bdevs": 2, 00:19:52.498 "num_base_bdevs_discovered": 1, 00:19:52.498 "num_base_bdevs_operational": 1, 00:19:52.498 "base_bdevs_list": [ 
00:19:52.498 { 00:19:52.498 "name": null, 00:19:52.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.498 "is_configured": false, 00:19:52.498 "data_offset": 0, 00:19:52.498 "data_size": 7936 00:19:52.498 }, 00:19:52.498 { 00:19:52.498 "name": "BaseBdev2", 00:19:52.498 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:52.498 "is_configured": true, 00:19:52.498 "data_offset": 256, 00:19:52.498 "data_size": 7936 00:19:52.498 } 00:19:52.498 ] 00:19:52.498 }' 00:19:52.498 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.498 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.758 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:52.758 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.758 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:52.758 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:52.758 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.758 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.758 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.758 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.758 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.758 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.758 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.758 "name": "raid_bdev1", 00:19:52.758 "uuid": "d7bc1de0-f7b1-4165-872e-354c6b93e4d1", 00:19:52.758 "strip_size_kb": 0, 00:19:52.758 "state": "online", 00:19:52.758 "raid_level": "raid1", 00:19:52.758 "superblock": true, 00:19:52.758 "num_base_bdevs": 2, 00:19:52.758 "num_base_bdevs_discovered": 1, 00:19:52.758 "num_base_bdevs_operational": 1, 00:19:52.758 "base_bdevs_list": [ 00:19:52.758 { 00:19:52.758 "name": null, 00:19:52.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.758 "is_configured": false, 00:19:52.758 "data_offset": 0, 00:19:52.758 "data_size": 7936 00:19:52.758 }, 00:19:52.758 { 00:19:52.758 "name": "BaseBdev2", 00:19:52.758 "uuid": "bcd4880b-8142-5aed-96f1-97d46888b5d6", 00:19:52.758 "is_configured": true, 00:19:52.758 "data_offset": 256, 00:19:52.758 "data_size": 7936 00:19:52.758 } 00:19:52.758 ] 00:19:52.758 }' 00:19:52.758 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.759 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:52.759 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.018 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:53.018 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88013 00:19:53.018 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88013 ']' 00:19:53.018 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88013 00:19:53.018 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:53.018 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.018 
11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88013 00:19:53.018 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.018 killing process with pid 88013 00:19:53.018 Received shutdown signal, test time was about 60.000000 seconds 00:19:53.018 00:19:53.018 Latency(us) 00:19:53.018 [2024-11-20T11:29:36.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.018 [2024-11-20T11:29:36.134Z] =================================================================================================================== 00:19:53.018 [2024-11-20T11:29:36.134Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:53.018 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.018 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88013' 00:19:53.018 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88013 00:19:53.018 [2024-11-20 11:29:35.956349] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:53.018 [2024-11-20 11:29:35.956509] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:53.018 [2024-11-20 11:29:35.956567] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:53.018 11:29:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88013 00:19:53.018 [2024-11-20 11:29:35.956581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:53.278 [2024-11-20 11:29:36.307977] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:54.658 11:29:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:19:54.658 00:19:54.658 real 0m20.321s 00:19:54.658 user 0m26.505s 00:19:54.658 sys 0m2.775s 00:19:54.658 11:29:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.658 ************************************ 00:19:54.658 END TEST raid_rebuild_test_sb_md_separate 00:19:54.658 ************************************ 00:19:54.658 11:29:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.658 11:29:37 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:19:54.658 11:29:37 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:19:54.658 11:29:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:54.658 11:29:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.658 11:29:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:54.658 ************************************ 00:19:54.658 START TEST raid_state_function_test_sb_md_interleaved 00:19:54.658 ************************************ 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:54.658 11:29:37 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88706 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:54.658 Process raid pid: 88706 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88706' 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88706 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88706 ']' 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.658 11:29:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:54.658 [2024-11-20 11:29:37.658369] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:19:54.658 [2024-11-20 11:29:37.658543] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.918 [2024-11-20 11:29:37.842584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.918 [2024-11-20 11:29:37.966725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.179 [2024-11-20 11:29:38.193932] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:55.179 [2024-11-20 11:29:38.193993] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:55.749 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.749 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:55.749 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:55.749 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.749 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:55.749 [2024-11-20 11:29:38.570213] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:55.749 [2024-11-20 11:29:38.570270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:55.749 [2024-11-20 11:29:38.570282] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:55.749 [2024-11-20 11:29:38.570293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:55.749 11:29:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.749 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:55.749 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:55.749 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:55.749 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:55.749 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:55.749 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:55.749 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.749 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.749 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.749 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.749 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.749 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:55.749 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.749 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:55.749 11:29:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.749 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.749 "name": "Existed_Raid", 00:19:55.749 "uuid": "4fc2cf07-d2a5-4182-b2b7-93bebc8af69c", 00:19:55.749 "strip_size_kb": 0, 00:19:55.749 "state": "configuring", 00:19:55.749 "raid_level": "raid1", 00:19:55.749 "superblock": true, 00:19:55.750 "num_base_bdevs": 2, 00:19:55.750 "num_base_bdevs_discovered": 0, 00:19:55.750 "num_base_bdevs_operational": 2, 00:19:55.750 "base_bdevs_list": [ 00:19:55.750 { 00:19:55.750 "name": "BaseBdev1", 00:19:55.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.750 "is_configured": false, 00:19:55.750 "data_offset": 0, 00:19:55.750 "data_size": 0 00:19:55.750 }, 00:19:55.750 { 00:19:55.750 "name": "BaseBdev2", 00:19:55.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.750 "is_configured": false, 00:19:55.750 "data_offset": 0, 00:19:55.750 "data_size": 0 00:19:55.750 } 00:19:55.750 ] 00:19:55.750 }' 00:19:55.750 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.750 11:29:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:56.008 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:56.008 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.008 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:56.008 [2024-11-20 11:29:39.061321] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:56.008 [2024-11-20 11:29:39.061367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:19:56.008 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.008 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:56.008 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.008 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:56.008 [2024-11-20 11:29:39.073281] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:56.008 [2024-11-20 11:29:39.073320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:56.008 [2024-11-20 11:29:39.073328] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:56.008 [2024-11-20 11:29:39.073339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:56.008 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.008 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:19:56.008 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.008 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:56.279 [2024-11-20 11:29:39.122083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:56.279 BaseBdev1 00:19:56.279 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.279 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:56.279 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:56.279 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:56.279 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:19:56.279 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:56.279 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:56.279 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:56.279 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.279 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:56.279 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.279 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:56.279 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.279 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:56.279 [ 00:19:56.279 { 00:19:56.279 "name": "BaseBdev1", 00:19:56.279 "aliases": [ 00:19:56.279 "4034d395-d761-410d-94e8-722d71e741c9" 00:19:56.279 ], 00:19:56.279 "product_name": "Malloc disk", 00:19:56.279 "block_size": 4128, 00:19:56.279 "num_blocks": 8192, 00:19:56.279 "uuid": "4034d395-d761-410d-94e8-722d71e741c9", 00:19:56.279 "md_size": 32, 00:19:56.279 
"md_interleave": true, 00:19:56.279 "dif_type": 0, 00:19:56.279 "assigned_rate_limits": { 00:19:56.279 "rw_ios_per_sec": 0, 00:19:56.279 "rw_mbytes_per_sec": 0, 00:19:56.279 "r_mbytes_per_sec": 0, 00:19:56.279 "w_mbytes_per_sec": 0 00:19:56.279 }, 00:19:56.279 "claimed": true, 00:19:56.279 "claim_type": "exclusive_write", 00:19:56.280 "zoned": false, 00:19:56.280 "supported_io_types": { 00:19:56.280 "read": true, 00:19:56.280 "write": true, 00:19:56.280 "unmap": true, 00:19:56.280 "flush": true, 00:19:56.280 "reset": true, 00:19:56.280 "nvme_admin": false, 00:19:56.280 "nvme_io": false, 00:19:56.280 "nvme_io_md": false, 00:19:56.280 "write_zeroes": true, 00:19:56.280 "zcopy": true, 00:19:56.280 "get_zone_info": false, 00:19:56.280 "zone_management": false, 00:19:56.280 "zone_append": false, 00:19:56.280 "compare": false, 00:19:56.280 "compare_and_write": false, 00:19:56.280 "abort": true, 00:19:56.280 "seek_hole": false, 00:19:56.280 "seek_data": false, 00:19:56.280 "copy": true, 00:19:56.280 "nvme_iov_md": false 00:19:56.280 }, 00:19:56.280 "memory_domains": [ 00:19:56.280 { 00:19:56.280 "dma_device_id": "system", 00:19:56.280 "dma_device_type": 1 00:19:56.280 }, 00:19:56.280 { 00:19:56.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.280 "dma_device_type": 2 00:19:56.280 } 00:19:56.280 ], 00:19:56.280 "driver_specific": {} 00:19:56.280 } 00:19:56.280 ] 00:19:56.280 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.280 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:19:56.280 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:56.280 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:56.280 11:29:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:56.280 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.280 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.280 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:56.280 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.280 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.280 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.280 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.280 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.280 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:56.280 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.280 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:56.280 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.280 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.280 "name": "Existed_Raid", 00:19:56.280 "uuid": "68fc6f86-28f9-4d98-917a-d76fb5448312", 00:19:56.280 "strip_size_kb": 0, 00:19:56.280 "state": "configuring", 00:19:56.280 "raid_level": "raid1", 
00:19:56.280 "superblock": true, 00:19:56.280 "num_base_bdevs": 2, 00:19:56.280 "num_base_bdevs_discovered": 1, 00:19:56.280 "num_base_bdevs_operational": 2, 00:19:56.280 "base_bdevs_list": [ 00:19:56.280 { 00:19:56.280 "name": "BaseBdev1", 00:19:56.280 "uuid": "4034d395-d761-410d-94e8-722d71e741c9", 00:19:56.280 "is_configured": true, 00:19:56.280 "data_offset": 256, 00:19:56.280 "data_size": 7936 00:19:56.280 }, 00:19:56.280 { 00:19:56.280 "name": "BaseBdev2", 00:19:56.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.280 "is_configured": false, 00:19:56.280 "data_offset": 0, 00:19:56.280 "data_size": 0 00:19:56.280 } 00:19:56.280 ] 00:19:56.280 }' 00:19:56.280 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.280 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:56.540 [2024-11-20 11:29:39.609343] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:56.540 [2024-11-20 11:29:39.609423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:56.540 [2024-11-20 11:29:39.621376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:56.540 [2024-11-20 11:29:39.623280] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:56.540 [2024-11-20 11:29:39.623322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.540 
11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:56.540 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.839 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.839 "name": "Existed_Raid", 00:19:56.839 "uuid": "43bafab4-66be-499c-9100-42e3ed02bf46", 00:19:56.839 "strip_size_kb": 0, 00:19:56.839 "state": "configuring", 00:19:56.839 "raid_level": "raid1", 00:19:56.839 "superblock": true, 00:19:56.839 "num_base_bdevs": 2, 00:19:56.839 "num_base_bdevs_discovered": 1, 00:19:56.839 "num_base_bdevs_operational": 2, 00:19:56.839 "base_bdevs_list": [ 00:19:56.839 { 00:19:56.839 "name": "BaseBdev1", 00:19:56.839 "uuid": "4034d395-d761-410d-94e8-722d71e741c9", 00:19:56.839 "is_configured": true, 00:19:56.839 "data_offset": 256, 00:19:56.839 "data_size": 7936 00:19:56.839 }, 00:19:56.839 { 00:19:56.839 "name": "BaseBdev2", 00:19:56.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.839 "is_configured": false, 00:19:56.839 "data_offset": 0, 00:19:56.839 "data_size": 0 00:19:56.839 } 00:19:56.839 ] 00:19:56.839 }' 00:19:56.839 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:19:56.839 11:29:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:57.098 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:19:57.098 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:57.099 [2024-11-20 11:29:40.076127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:57.099 [2024-11-20 11:29:40.076349] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:57.099 [2024-11-20 11:29:40.076363] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:57.099 [2024-11-20 11:29:40.076489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:57.099 [2024-11-20 11:29:40.076573] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:57.099 [2024-11-20 11:29:40.076589] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:57.099 [2024-11-20 11:29:40.076659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.099 BaseBdev2 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:57.099 [ 00:19:57.099 { 00:19:57.099 "name": "BaseBdev2", 00:19:57.099 "aliases": [ 00:19:57.099 "dd6a2641-c52d-4750-9ca6-a0ee92a1ac8e" 00:19:57.099 ], 00:19:57.099 "product_name": "Malloc disk", 00:19:57.099 "block_size": 4128, 00:19:57.099 "num_blocks": 8192, 00:19:57.099 "uuid": "dd6a2641-c52d-4750-9ca6-a0ee92a1ac8e", 00:19:57.099 "md_size": 32, 00:19:57.099 "md_interleave": true, 00:19:57.099 "dif_type": 0, 00:19:57.099 "assigned_rate_limits": { 00:19:57.099 "rw_ios_per_sec": 0, 00:19:57.099 "rw_mbytes_per_sec": 0, 00:19:57.099 "r_mbytes_per_sec": 0, 00:19:57.099 "w_mbytes_per_sec": 0 00:19:57.099 }, 00:19:57.099 "claimed": true, 00:19:57.099 "claim_type": "exclusive_write", 
00:19:57.099 "zoned": false, 00:19:57.099 "supported_io_types": { 00:19:57.099 "read": true, 00:19:57.099 "write": true, 00:19:57.099 "unmap": true, 00:19:57.099 "flush": true, 00:19:57.099 "reset": true, 00:19:57.099 "nvme_admin": false, 00:19:57.099 "nvme_io": false, 00:19:57.099 "nvme_io_md": false, 00:19:57.099 "write_zeroes": true, 00:19:57.099 "zcopy": true, 00:19:57.099 "get_zone_info": false, 00:19:57.099 "zone_management": false, 00:19:57.099 "zone_append": false, 00:19:57.099 "compare": false, 00:19:57.099 "compare_and_write": false, 00:19:57.099 "abort": true, 00:19:57.099 "seek_hole": false, 00:19:57.099 "seek_data": false, 00:19:57.099 "copy": true, 00:19:57.099 "nvme_iov_md": false 00:19:57.099 }, 00:19:57.099 "memory_domains": [ 00:19:57.099 { 00:19:57.099 "dma_device_id": "system", 00:19:57.099 "dma_device_type": 1 00:19:57.099 }, 00:19:57.099 { 00:19:57.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.099 "dma_device_type": 2 00:19:57.099 } 00:19:57.099 ], 00:19:57.099 "driver_specific": {} 00:19:57.099 } 00:19:57.099 ] 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.099 
11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.099 "name": "Existed_Raid", 00:19:57.099 "uuid": "43bafab4-66be-499c-9100-42e3ed02bf46", 00:19:57.099 "strip_size_kb": 0, 00:19:57.099 "state": "online", 00:19:57.099 "raid_level": "raid1", 00:19:57.099 "superblock": true, 00:19:57.099 "num_base_bdevs": 2, 00:19:57.099 "num_base_bdevs_discovered": 2, 00:19:57.099 
"num_base_bdevs_operational": 2, 00:19:57.099 "base_bdevs_list": [ 00:19:57.099 { 00:19:57.099 "name": "BaseBdev1", 00:19:57.099 "uuid": "4034d395-d761-410d-94e8-722d71e741c9", 00:19:57.099 "is_configured": true, 00:19:57.099 "data_offset": 256, 00:19:57.099 "data_size": 7936 00:19:57.099 }, 00:19:57.099 { 00:19:57.099 "name": "BaseBdev2", 00:19:57.099 "uuid": "dd6a2641-c52d-4750-9ca6-a0ee92a1ac8e", 00:19:57.099 "is_configured": true, 00:19:57.099 "data_offset": 256, 00:19:57.099 "data_size": 7936 00:19:57.099 } 00:19:57.099 ] 00:19:57.099 }' 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.099 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:57.669 11:29:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:57.669 [2024-11-20 11:29:40.587877] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:57.669 "name": "Existed_Raid", 00:19:57.669 "aliases": [ 00:19:57.669 "43bafab4-66be-499c-9100-42e3ed02bf46" 00:19:57.669 ], 00:19:57.669 "product_name": "Raid Volume", 00:19:57.669 "block_size": 4128, 00:19:57.669 "num_blocks": 7936, 00:19:57.669 "uuid": "43bafab4-66be-499c-9100-42e3ed02bf46", 00:19:57.669 "md_size": 32, 00:19:57.669 "md_interleave": true, 00:19:57.669 "dif_type": 0, 00:19:57.669 "assigned_rate_limits": { 00:19:57.669 "rw_ios_per_sec": 0, 00:19:57.669 "rw_mbytes_per_sec": 0, 00:19:57.669 "r_mbytes_per_sec": 0, 00:19:57.669 "w_mbytes_per_sec": 0 00:19:57.669 }, 00:19:57.669 "claimed": false, 00:19:57.669 "zoned": false, 00:19:57.669 "supported_io_types": { 00:19:57.669 "read": true, 00:19:57.669 "write": true, 00:19:57.669 "unmap": false, 00:19:57.669 "flush": false, 00:19:57.669 "reset": true, 00:19:57.669 "nvme_admin": false, 00:19:57.669 "nvme_io": false, 00:19:57.669 "nvme_io_md": false, 00:19:57.669 "write_zeroes": true, 00:19:57.669 "zcopy": false, 00:19:57.669 "get_zone_info": false, 00:19:57.669 "zone_management": false, 00:19:57.669 "zone_append": false, 00:19:57.669 "compare": false, 00:19:57.669 "compare_and_write": false, 00:19:57.669 "abort": false, 00:19:57.669 "seek_hole": false, 00:19:57.669 "seek_data": false, 00:19:57.669 "copy": false, 00:19:57.669 "nvme_iov_md": false 00:19:57.669 }, 00:19:57.669 "memory_domains": [ 00:19:57.669 { 00:19:57.669 "dma_device_id": "system", 00:19:57.669 "dma_device_type": 1 00:19:57.669 }, 00:19:57.669 { 00:19:57.669 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:57.669 "dma_device_type": 2 00:19:57.669 }, 00:19:57.669 { 00:19:57.669 "dma_device_id": "system", 00:19:57.669 "dma_device_type": 1 00:19:57.669 }, 00:19:57.669 { 00:19:57.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.669 "dma_device_type": 2 00:19:57.669 } 00:19:57.669 ], 00:19:57.669 "driver_specific": { 00:19:57.669 "raid": { 00:19:57.669 "uuid": "43bafab4-66be-499c-9100-42e3ed02bf46", 00:19:57.669 "strip_size_kb": 0, 00:19:57.669 "state": "online", 00:19:57.669 "raid_level": "raid1", 00:19:57.669 "superblock": true, 00:19:57.669 "num_base_bdevs": 2, 00:19:57.669 "num_base_bdevs_discovered": 2, 00:19:57.669 "num_base_bdevs_operational": 2, 00:19:57.669 "base_bdevs_list": [ 00:19:57.669 { 00:19:57.669 "name": "BaseBdev1", 00:19:57.669 "uuid": "4034d395-d761-410d-94e8-722d71e741c9", 00:19:57.669 "is_configured": true, 00:19:57.669 "data_offset": 256, 00:19:57.669 "data_size": 7936 00:19:57.669 }, 00:19:57.669 { 00:19:57.669 "name": "BaseBdev2", 00:19:57.669 "uuid": "dd6a2641-c52d-4750-9ca6-a0ee92a1ac8e", 00:19:57.669 "is_configured": true, 00:19:57.669 "data_offset": 256, 00:19:57.669 "data_size": 7936 00:19:57.669 } 00:19:57.669 ] 00:19:57.669 } 00:19:57.669 } 00:19:57.669 }' 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:57.669 BaseBdev2' 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.669 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:57.670 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:57.670 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:57.670 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:57.670 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:57.670 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.670 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:57.928 
11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:57.928 [2024-11-20 11:29:40.827657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:57.928 11:29:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.928 "name": "Existed_Raid", 00:19:57.928 "uuid": "43bafab4-66be-499c-9100-42e3ed02bf46", 00:19:57.928 "strip_size_kb": 0, 00:19:57.928 "state": "online", 00:19:57.928 "raid_level": "raid1", 00:19:57.928 "superblock": true, 00:19:57.928 "num_base_bdevs": 2, 00:19:57.928 "num_base_bdevs_discovered": 1, 00:19:57.928 "num_base_bdevs_operational": 1, 00:19:57.928 "base_bdevs_list": [ 00:19:57.928 { 00:19:57.928 "name": null, 00:19:57.928 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:57.928 "is_configured": false, 00:19:57.928 "data_offset": 0, 00:19:57.928 "data_size": 7936 00:19:57.928 }, 00:19:57.928 { 00:19:57.928 "name": "BaseBdev2", 00:19:57.928 "uuid": "dd6a2641-c52d-4750-9ca6-a0ee92a1ac8e", 00:19:57.928 "is_configured": true, 00:19:57.928 "data_offset": 256, 00:19:57.928 "data_size": 7936 00:19:57.928 } 00:19:57.928 ] 00:19:57.928 }' 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.928 11:29:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:58.497 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:58.497 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:58.497 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:58.497 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.497 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.497 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:58.497 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.497 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:58.497 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:58.497 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:58.497 11:29:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.497 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:58.497 [2024-11-20 11:29:41.501403] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:58.497 [2024-11-20 11:29:41.501536] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:58.497 [2024-11-20 11:29:41.600693] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:58.497 [2024-11-20 11:29:41.600754] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.497 [2024-11-20 11:29:41.600766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:58.497 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.497 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:58.497 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:58.497 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.497 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.497 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:58.497 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:58.757 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.757 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:58.757 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:58.757 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:58.757 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88706 00:19:58.757 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88706 ']' 00:19:58.757 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88706 00:19:58.757 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:58.757 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:58.757 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88706 00:19:58.757 killing process with pid 88706 00:19:58.757 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:58.757 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:58.757 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88706' 00:19:58.757 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88706 00:19:58.757 [2024-11-20 11:29:41.699141] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:58.757 11:29:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88706 00:19:58.757 [2024-11-20 11:29:41.716742] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:00.136 
11:29:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:20:00.136 00:20:00.136 real 0m5.406s 00:20:00.136 user 0m7.801s 00:20:00.136 sys 0m0.912s 00:20:00.136 ************************************ 00:20:00.136 END TEST raid_state_function_test_sb_md_interleaved 00:20:00.136 ************************************ 00:20:00.136 11:29:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.136 11:29:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.136 11:29:43 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:20:00.136 11:29:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:00.136 11:29:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:00.136 11:29:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:00.136 ************************************ 00:20:00.136 START TEST raid_superblock_test_md_interleaved 00:20:00.136 ************************************ 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88958 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88958 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88958 ']' 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.136 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.136 [2024-11-20 11:29:43.125267] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:20:00.136 [2024-11-20 11:29:43.125393] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88958 ] 00:20:00.395 [2024-11-20 11:29:43.300268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.395 [2024-11-20 11:29:43.421905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.655 [2024-11-20 11:29:43.642632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:00.655 [2024-11-20 11:29:43.642673] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:00.914 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.914 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:00.914 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:00.914 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:00.914 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:00.914 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:20:00.914 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:00.914 11:29:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:00.914 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:00.914 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:00.914 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:20:00.914 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.914 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.174 malloc1 00:20:01.174 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.174 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:01.174 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.174 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.174 [2024-11-20 11:29:44.054946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:01.174 [2024-11-20 11:29:44.055014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.174 [2024-11-20 11:29:44.055039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:01.174 [2024-11-20 11:29:44.055050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.174 
[2024-11-20 11:29:44.057159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.174 [2024-11-20 11:29:44.057200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:01.174 pt1 00:20:01.174 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.174 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:01.174 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:01.174 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:01.174 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:01.174 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:01.174 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:01.174 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:01.174 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:01.174 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:20:01.174 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.174 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.174 malloc2 00:20:01.174 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.174 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:01.174 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.174 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.174 [2024-11-20 11:29:44.111897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:01.174 [2024-11-20 11:29:44.111961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.175 [2024-11-20 11:29:44.112000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:01.175 [2024-11-20 11:29:44.112011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.175 [2024-11-20 11:29:44.114044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.175 [2024-11-20 11:29:44.114080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:01.175 pt2 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.175 [2024-11-20 11:29:44.123917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:01.175 [2024-11-20 11:29:44.125924] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:01.175 [2024-11-20 11:29:44.126122] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:01.175 [2024-11-20 11:29:44.126136] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:01.175 [2024-11-20 11:29:44.126229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:01.175 [2024-11-20 11:29:44.126312] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:01.175 [2024-11-20 11:29:44.126339] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:01.175 [2024-11-20 11:29:44.126420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.175 
11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.175 "name": "raid_bdev1", 00:20:01.175 "uuid": "12266e61-57b4-4754-92f0-bd440971916b", 00:20:01.175 "strip_size_kb": 0, 00:20:01.175 "state": "online", 00:20:01.175 "raid_level": "raid1", 00:20:01.175 "superblock": true, 00:20:01.175 "num_base_bdevs": 2, 00:20:01.175 "num_base_bdevs_discovered": 2, 00:20:01.175 "num_base_bdevs_operational": 2, 00:20:01.175 "base_bdevs_list": [ 00:20:01.175 { 00:20:01.175 "name": "pt1", 00:20:01.175 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:01.175 "is_configured": true, 00:20:01.175 "data_offset": 256, 00:20:01.175 "data_size": 7936 00:20:01.175 }, 00:20:01.175 { 00:20:01.175 "name": "pt2", 00:20:01.175 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:01.175 "is_configured": true, 00:20:01.175 "data_offset": 256, 00:20:01.175 "data_size": 7936 00:20:01.175 } 00:20:01.175 ] 00:20:01.175 }' 00:20:01.175 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.175 11:29:44 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.743 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:01.743 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:01.743 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:01.743 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:01.743 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:01.743 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:01.743 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:01.743 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:01.743 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.744 [2024-11-20 11:29:44.611965] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:01.744 "name": "raid_bdev1", 00:20:01.744 "aliases": [ 00:20:01.744 "12266e61-57b4-4754-92f0-bd440971916b" 00:20:01.744 ], 00:20:01.744 "product_name": "Raid Volume", 00:20:01.744 "block_size": 4128, 00:20:01.744 "num_blocks": 7936, 00:20:01.744 "uuid": "12266e61-57b4-4754-92f0-bd440971916b", 00:20:01.744 "md_size": 32, 
00:20:01.744 "md_interleave": true, 00:20:01.744 "dif_type": 0, 00:20:01.744 "assigned_rate_limits": { 00:20:01.744 "rw_ios_per_sec": 0, 00:20:01.744 "rw_mbytes_per_sec": 0, 00:20:01.744 "r_mbytes_per_sec": 0, 00:20:01.744 "w_mbytes_per_sec": 0 00:20:01.744 }, 00:20:01.744 "claimed": false, 00:20:01.744 "zoned": false, 00:20:01.744 "supported_io_types": { 00:20:01.744 "read": true, 00:20:01.744 "write": true, 00:20:01.744 "unmap": false, 00:20:01.744 "flush": false, 00:20:01.744 "reset": true, 00:20:01.744 "nvme_admin": false, 00:20:01.744 "nvme_io": false, 00:20:01.744 "nvme_io_md": false, 00:20:01.744 "write_zeroes": true, 00:20:01.744 "zcopy": false, 00:20:01.744 "get_zone_info": false, 00:20:01.744 "zone_management": false, 00:20:01.744 "zone_append": false, 00:20:01.744 "compare": false, 00:20:01.744 "compare_and_write": false, 00:20:01.744 "abort": false, 00:20:01.744 "seek_hole": false, 00:20:01.744 "seek_data": false, 00:20:01.744 "copy": false, 00:20:01.744 "nvme_iov_md": false 00:20:01.744 }, 00:20:01.744 "memory_domains": [ 00:20:01.744 { 00:20:01.744 "dma_device_id": "system", 00:20:01.744 "dma_device_type": 1 00:20:01.744 }, 00:20:01.744 { 00:20:01.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.744 "dma_device_type": 2 00:20:01.744 }, 00:20:01.744 { 00:20:01.744 "dma_device_id": "system", 00:20:01.744 "dma_device_type": 1 00:20:01.744 }, 00:20:01.744 { 00:20:01.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.744 "dma_device_type": 2 00:20:01.744 } 00:20:01.744 ], 00:20:01.744 "driver_specific": { 00:20:01.744 "raid": { 00:20:01.744 "uuid": "12266e61-57b4-4754-92f0-bd440971916b", 00:20:01.744 "strip_size_kb": 0, 00:20:01.744 "state": "online", 00:20:01.744 "raid_level": "raid1", 00:20:01.744 "superblock": true, 00:20:01.744 "num_base_bdevs": 2, 00:20:01.744 "num_base_bdevs_discovered": 2, 00:20:01.744 "num_base_bdevs_operational": 2, 00:20:01.744 "base_bdevs_list": [ 00:20:01.744 { 00:20:01.744 "name": "pt1", 00:20:01.744 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:20:01.744 "is_configured": true, 00:20:01.744 "data_offset": 256, 00:20:01.744 "data_size": 7936 00:20:01.744 }, 00:20:01.744 { 00:20:01.744 "name": "pt2", 00:20:01.744 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:01.744 "is_configured": true, 00:20:01.744 "data_offset": 256, 00:20:01.744 "data_size": 7936 00:20:01.744 } 00:20:01.744 ] 00:20:01.744 } 00:20:01.744 } 00:20:01.744 }' 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:01.744 pt2' 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:01.744 11:29:44 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.744 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:01.744 [2024-11-20 11:29:44.847931] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:02.004 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.004 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=12266e61-57b4-4754-92f0-bd440971916b 00:20:02.004 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 12266e61-57b4-4754-92f0-bd440971916b ']' 00:20:02.004 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:02.004 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.004 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.004 [2024-11-20 11:29:44.891604] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:02.004 [2024-11-20 11:29:44.891637] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:02.004 [2024-11-20 11:29:44.891745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:02.004 [2024-11-20 11:29:44.891815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:02.004 [2024-11-20 11:29:44.891829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:02.004 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.004 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:02.004 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.004 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.004 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.004 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.004 11:29:44 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:02.004 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:02.004 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:02.004 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:02.004 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.004 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.004 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.004 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:02.004 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:02.004 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.005 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.005 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.005 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:02.005 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:02.005 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.005 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.005 11:29:44 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.005 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:02.005 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:02.005 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:20:02.005 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:02.005 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:02.005 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:02.005 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:02.005 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:02.005 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:02.005 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.005 11:29:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.005 [2024-11-20 11:29:45.007643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:02.005 [2024-11-20 11:29:45.009793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:02.005 [2024-11-20 11:29:45.009887] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:20:02.005 [2024-11-20 11:29:45.009951] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:02.005 [2024-11-20 11:29:45.009974] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:02.005 [2024-11-20 11:29:45.009986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:02.005 request: 00:20:02.005 { 00:20:02.005 "name": "raid_bdev1", 00:20:02.005 "raid_level": "raid1", 00:20:02.005 "base_bdevs": [ 00:20:02.005 "malloc1", 00:20:02.005 "malloc2" 00:20:02.005 ], 00:20:02.005 "superblock": false, 00:20:02.005 "method": "bdev_raid_create", 00:20:02.005 "req_id": 1 00:20:02.005 } 00:20:02.005 Got JSON-RPC error response 00:20:02.005 response: 00:20:02.005 { 00:20:02.005 "code": -17, 00:20:02.005 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:02.005 } 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.005 11:29:45 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.005 [2024-11-20 11:29:45.071608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:02.005 [2024-11-20 11:29:45.071675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.005 [2024-11-20 11:29:45.071693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:02.005 [2024-11-20 11:29:45.071706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.005 [2024-11-20 11:29:45.073864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.005 [2024-11-20 11:29:45.073906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:02.005 [2024-11-20 11:29:45.073964] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:02.005 [2024-11-20 11:29:45.074058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:02.005 pt1 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.005 11:29:45 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.005 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.264 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.264 
"name": "raid_bdev1", 00:20:02.264 "uuid": "12266e61-57b4-4754-92f0-bd440971916b", 00:20:02.264 "strip_size_kb": 0, 00:20:02.264 "state": "configuring", 00:20:02.264 "raid_level": "raid1", 00:20:02.264 "superblock": true, 00:20:02.264 "num_base_bdevs": 2, 00:20:02.264 "num_base_bdevs_discovered": 1, 00:20:02.264 "num_base_bdevs_operational": 2, 00:20:02.264 "base_bdevs_list": [ 00:20:02.264 { 00:20:02.264 "name": "pt1", 00:20:02.264 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:02.264 "is_configured": true, 00:20:02.264 "data_offset": 256, 00:20:02.264 "data_size": 7936 00:20:02.264 }, 00:20:02.264 { 00:20:02.264 "name": null, 00:20:02.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:02.264 "is_configured": false, 00:20:02.264 "data_offset": 256, 00:20:02.264 "data_size": 7936 00:20:02.264 } 00:20:02.264 ] 00:20:02.264 }' 00:20:02.264 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.264 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.523 [2024-11-20 11:29:45.551611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:02.523 [2024-11-20 11:29:45.551695] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.523 [2024-11-20 11:29:45.551718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:02.523 [2024-11-20 11:29:45.551729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.523 [2024-11-20 11:29:45.551909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.523 [2024-11-20 11:29:45.551928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:02.523 [2024-11-20 11:29:45.551983] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:02.523 [2024-11-20 11:29:45.552010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:02.523 [2024-11-20 11:29:45.552102] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:02.523 [2024-11-20 11:29:45.552118] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:02.523 [2024-11-20 11:29:45.552198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:02.523 [2024-11-20 11:29:45.552281] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:02.523 [2024-11-20 11:29:45.552295] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:02.523 [2024-11-20 11:29:45.552367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.523 pt2 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:02.523 11:29:45 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.523 "name": 
"raid_bdev1", 00:20:02.523 "uuid": "12266e61-57b4-4754-92f0-bd440971916b", 00:20:02.523 "strip_size_kb": 0, 00:20:02.523 "state": "online", 00:20:02.523 "raid_level": "raid1", 00:20:02.523 "superblock": true, 00:20:02.523 "num_base_bdevs": 2, 00:20:02.523 "num_base_bdevs_discovered": 2, 00:20:02.523 "num_base_bdevs_operational": 2, 00:20:02.523 "base_bdevs_list": [ 00:20:02.523 { 00:20:02.523 "name": "pt1", 00:20:02.523 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:02.523 "is_configured": true, 00:20:02.523 "data_offset": 256, 00:20:02.523 "data_size": 7936 00:20:02.523 }, 00:20:02.523 { 00:20:02.523 "name": "pt2", 00:20:02.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:02.523 "is_configured": true, 00:20:02.523 "data_offset": 256, 00:20:02.523 "data_size": 7936 00:20:02.523 } 00:20:02.523 ] 00:20:02.523 }' 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.523 11:29:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.092 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:03.092 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:03.092 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:03.092 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:03.092 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:03.092 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:03.092 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:03.092 11:29:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:03.092 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.092 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.092 [2024-11-20 11:29:46.051951] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:03.092 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.092 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:03.092 "name": "raid_bdev1", 00:20:03.092 "aliases": [ 00:20:03.092 "12266e61-57b4-4754-92f0-bd440971916b" 00:20:03.092 ], 00:20:03.092 "product_name": "Raid Volume", 00:20:03.092 "block_size": 4128, 00:20:03.092 "num_blocks": 7936, 00:20:03.092 "uuid": "12266e61-57b4-4754-92f0-bd440971916b", 00:20:03.092 "md_size": 32, 00:20:03.092 "md_interleave": true, 00:20:03.092 "dif_type": 0, 00:20:03.092 "assigned_rate_limits": { 00:20:03.092 "rw_ios_per_sec": 0, 00:20:03.092 "rw_mbytes_per_sec": 0, 00:20:03.092 "r_mbytes_per_sec": 0, 00:20:03.092 "w_mbytes_per_sec": 0 00:20:03.092 }, 00:20:03.092 "claimed": false, 00:20:03.092 "zoned": false, 00:20:03.092 "supported_io_types": { 00:20:03.092 "read": true, 00:20:03.092 "write": true, 00:20:03.092 "unmap": false, 00:20:03.092 "flush": false, 00:20:03.092 "reset": true, 00:20:03.092 "nvme_admin": false, 00:20:03.092 "nvme_io": false, 00:20:03.092 "nvme_io_md": false, 00:20:03.092 "write_zeroes": true, 00:20:03.092 "zcopy": false, 00:20:03.092 "get_zone_info": false, 00:20:03.092 "zone_management": false, 00:20:03.092 "zone_append": false, 00:20:03.092 "compare": false, 00:20:03.092 "compare_and_write": false, 00:20:03.092 "abort": false, 00:20:03.092 "seek_hole": false, 00:20:03.092 "seek_data": false, 00:20:03.092 "copy": false, 00:20:03.092 "nvme_iov_md": 
false 00:20:03.092 }, 00:20:03.092 "memory_domains": [ 00:20:03.092 { 00:20:03.092 "dma_device_id": "system", 00:20:03.092 "dma_device_type": 1 00:20:03.092 }, 00:20:03.092 { 00:20:03.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.092 "dma_device_type": 2 00:20:03.092 }, 00:20:03.092 { 00:20:03.092 "dma_device_id": "system", 00:20:03.092 "dma_device_type": 1 00:20:03.092 }, 00:20:03.092 { 00:20:03.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.092 "dma_device_type": 2 00:20:03.092 } 00:20:03.092 ], 00:20:03.092 "driver_specific": { 00:20:03.092 "raid": { 00:20:03.092 "uuid": "12266e61-57b4-4754-92f0-bd440971916b", 00:20:03.092 "strip_size_kb": 0, 00:20:03.092 "state": "online", 00:20:03.092 "raid_level": "raid1", 00:20:03.092 "superblock": true, 00:20:03.092 "num_base_bdevs": 2, 00:20:03.092 "num_base_bdevs_discovered": 2, 00:20:03.092 "num_base_bdevs_operational": 2, 00:20:03.092 "base_bdevs_list": [ 00:20:03.092 { 00:20:03.092 "name": "pt1", 00:20:03.092 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:03.092 "is_configured": true, 00:20:03.092 "data_offset": 256, 00:20:03.092 "data_size": 7936 00:20:03.092 }, 00:20:03.092 { 00:20:03.092 "name": "pt2", 00:20:03.092 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:03.092 "is_configured": true, 00:20:03.092 "data_offset": 256, 00:20:03.092 "data_size": 7936 00:20:03.092 } 00:20:03.092 ] 00:20:03.092 } 00:20:03.092 } 00:20:03.092 }' 00:20:03.092 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:03.092 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:03.092 pt2' 00:20:03.092 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:03.092 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:03.092 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:03.092 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:03.092 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.092 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:03.092 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.351 [2024-11-20 11:29:46.279902] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 12266e61-57b4-4754-92f0-bd440971916b '!=' 12266e61-57b4-4754-92f0-bd440971916b ']' 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.351 [2024-11-20 11:29:46.323650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:03.351 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:03.352 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:03.352 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:03.352 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:03.352 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.352 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:03.352 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:03.352 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.352 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.352 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.352 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.352 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.352 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.352 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.352 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.352 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.352 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:20:03.352 "name": "raid_bdev1", 00:20:03.352 "uuid": "12266e61-57b4-4754-92f0-bd440971916b", 00:20:03.352 "strip_size_kb": 0, 00:20:03.352 "state": "online", 00:20:03.352 "raid_level": "raid1", 00:20:03.352 "superblock": true, 00:20:03.352 "num_base_bdevs": 2, 00:20:03.352 "num_base_bdevs_discovered": 1, 00:20:03.352 "num_base_bdevs_operational": 1, 00:20:03.352 "base_bdevs_list": [ 00:20:03.352 { 00:20:03.352 "name": null, 00:20:03.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.352 "is_configured": false, 00:20:03.352 "data_offset": 0, 00:20:03.352 "data_size": 7936 00:20:03.352 }, 00:20:03.352 { 00:20:03.352 "name": "pt2", 00:20:03.352 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:03.352 "is_configured": true, 00:20:03.352 "data_offset": 256, 00:20:03.352 "data_size": 7936 00:20:03.352 } 00:20:03.352 ] 00:20:03.352 }' 00:20:03.352 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.352 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.613 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:03.613 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.613 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.613 [2024-11-20 11:29:46.711596] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:03.613 [2024-11-20 11:29:46.711630] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:03.613 [2024-11-20 11:29:46.711714] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:03.613 [2024-11-20 11:29:46.711765] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:20:03.613 [2024-11-20 11:29:46.711777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:03.613 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.613 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.613 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:03.613 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.613 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.873 [2024-11-20 11:29:46.783602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:03.873 [2024-11-20 11:29:46.783668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.873 [2024-11-20 11:29:46.783685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:03.873 [2024-11-20 11:29:46.783695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.873 [2024-11-20 11:29:46.785651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.873 [2024-11-20 11:29:46.785704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:03.873 [2024-11-20 11:29:46.785754] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:03.873 [2024-11-20 11:29:46.785825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:03.873 [2024-11-20 11:29:46.785894] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:03.873 [2024-11-20 11:29:46.785905] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:20:03.873 [2024-11-20 11:29:46.785994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:03.873 [2024-11-20 11:29:46.786065] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:03.873 [2024-11-20 11:29:46.786073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:03.873 [2024-11-20 11:29:46.786140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.873 pt2 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.873 11:29:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.873 "name": "raid_bdev1", 00:20:03.873 "uuid": "12266e61-57b4-4754-92f0-bd440971916b", 00:20:03.873 "strip_size_kb": 0, 00:20:03.873 "state": "online", 00:20:03.873 "raid_level": "raid1", 00:20:03.873 "superblock": true, 00:20:03.873 "num_base_bdevs": 2, 00:20:03.873 "num_base_bdevs_discovered": 1, 00:20:03.873 "num_base_bdevs_operational": 1, 00:20:03.873 "base_bdevs_list": [ 00:20:03.873 { 00:20:03.873 "name": null, 00:20:03.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.873 "is_configured": false, 00:20:03.873 "data_offset": 256, 00:20:03.873 "data_size": 7936 00:20:03.873 }, 00:20:03.873 { 00:20:03.873 "name": "pt2", 00:20:03.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:03.873 "is_configured": true, 00:20:03.873 "data_offset": 256, 00:20:03.873 "data_size": 7936 00:20:03.873 } 00:20:03.873 ] 00:20:03.873 }' 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.873 11:29:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.131 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:04.131 11:29:47 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.131 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.131 [2024-11-20 11:29:47.223612] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:04.131 [2024-11-20 11:29:47.223647] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:04.131 [2024-11-20 11:29:47.223737] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:04.131 [2024-11-20 11:29:47.223796] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:04.131 [2024-11-20 11:29:47.223807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:04.132 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.132 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.132 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:04.132 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.132 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.132 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.391 [2024-11-20 11:29:47.291640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:04.391 [2024-11-20 11:29:47.291723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.391 [2024-11-20 11:29:47.291748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:04.391 [2024-11-20 11:29:47.291759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.391 [2024-11-20 11:29:47.293946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.391 [2024-11-20 11:29:47.293989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:04.391 [2024-11-20 11:29:47.294057] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:04.391 [2024-11-20 11:29:47.294110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:04.391 [2024-11-20 11:29:47.294218] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:04.391 [2024-11-20 11:29:47.294238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:04.391 [2024-11-20 11:29:47.294260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:04.391 [2024-11-20 11:29:47.294323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:04.391 [2024-11-20 11:29:47.294403] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:20:04.391 [2024-11-20 11:29:47.294417] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:04.391 [2024-11-20 11:29:47.294515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:04.391 [2024-11-20 11:29:47.294594] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:04.391 [2024-11-20 11:29:47.294612] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:04.391 [2024-11-20 11:29:47.294700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.391 pt1 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.391 11:29:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.391 "name": "raid_bdev1", 00:20:04.391 "uuid": "12266e61-57b4-4754-92f0-bd440971916b", 00:20:04.391 "strip_size_kb": 0, 00:20:04.391 "state": "online", 00:20:04.391 "raid_level": "raid1", 00:20:04.391 "superblock": true, 00:20:04.391 "num_base_bdevs": 2, 00:20:04.391 "num_base_bdevs_discovered": 1, 00:20:04.391 "num_base_bdevs_operational": 1, 00:20:04.391 "base_bdevs_list": [ 00:20:04.391 { 00:20:04.391 "name": null, 00:20:04.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.391 "is_configured": false, 00:20:04.391 "data_offset": 256, 00:20:04.391 "data_size": 7936 00:20:04.391 }, 00:20:04.391 { 00:20:04.391 "name": "pt2", 00:20:04.391 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:04.391 "is_configured": true, 00:20:04.391 "data_offset": 256, 00:20:04.391 "data_size": 7936 00:20:04.391 } 00:20:04.391 ] 00:20:04.391 }' 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.391 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.971 [2024-11-20 11:29:47.863915] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 12266e61-57b4-4754-92f0-bd440971916b '!=' 12266e61-57b4-4754-92f0-bd440971916b ']' 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88958 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88958 ']' 00:20:04.971 11:29:47 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88958 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88958 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:04.971 killing process with pid 88958 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88958' 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88958 00:20:04.971 [2024-11-20 11:29:47.914237] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:04.971 [2024-11-20 11:29:47.914351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:04.971 [2024-11-20 11:29:47.914405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:04.971 [2024-11-20 11:29:47.914422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:04.971 11:29:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88958 00:20:05.231 [2024-11-20 11:29:48.158975] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:06.608 11:29:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:20:06.608 00:20:06.608 real 0m6.406s 00:20:06.608 user 0m9.673s 00:20:06.608 sys 0m1.070s 00:20:06.608 
11:29:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:06.608 11:29:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.608 ************************************ 00:20:06.608 END TEST raid_superblock_test_md_interleaved 00:20:06.608 ************************************ 00:20:06.608 11:29:49 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:20:06.608 11:29:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:06.608 11:29:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:06.608 11:29:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:06.608 ************************************ 00:20:06.608 START TEST raid_rebuild_test_sb_md_interleaved 00:20:06.608 ************************************ 00:20:06.608 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:20:06.608 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:06.608 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:06.608 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:06.608 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:06.608 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:20:06.608 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:06.608 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:06.608 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:06.608 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:06.608 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=89285 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89285 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89285 ']' 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.609 11:29:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.609 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:06.609 Zero copy mechanism will not be used. 00:20:06.609 [2024-11-20 11:29:49.617720] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:20:06.609 [2024-11-20 11:29:49.617877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89285 ] 00:20:06.868 [2024-11-20 11:29:49.799437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.868 [2024-11-20 11:29:49.935898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.128 [2024-11-20 11:29:50.175267] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:07.128 [2024-11-20 11:29:50.175352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.698 BaseBdev1_malloc 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.698 11:29:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.698 [2024-11-20 11:29:50.616769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:07.698 [2024-11-20 11:29:50.616853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.698 [2024-11-20 11:29:50.616882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:07.698 [2024-11-20 11:29:50.616896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.698 [2024-11-20 11:29:50.619163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.698 [2024-11-20 11:29:50.619213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:07.698 BaseBdev1 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.698 BaseBdev2_malloc 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:20:07.698 [2024-11-20 11:29:50.679017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:07.698 [2024-11-20 11:29:50.679107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.698 [2024-11-20 11:29:50.679134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:07.698 [2024-11-20 11:29:50.679149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.698 [2024-11-20 11:29:50.681319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.698 [2024-11-20 11:29:50.681364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:07.698 BaseBdev2 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.698 spare_malloc 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.698 spare_delay 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.698 [2024-11-20 11:29:50.762321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:07.698 [2024-11-20 11:29:50.762409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.698 [2024-11-20 11:29:50.762442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:07.698 [2024-11-20 11:29:50.762472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.698 [2024-11-20 11:29:50.764722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.698 [2024-11-20 11:29:50.764774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:07.698 spare 00:20:07.698 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.699 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:07.699 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.699 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.699 [2024-11-20 11:29:50.778353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:07.699 [2024-11-20 11:29:50.780587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:07.699 [2024-11-20 
11:29:50.780838] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:07.699 [2024-11-20 11:29:50.780865] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:07.699 [2024-11-20 11:29:50.780986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:07.699 [2024-11-20 11:29:50.781079] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:07.699 [2024-11-20 11:29:50.781088] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:07.699 [2024-11-20 11:29:50.781186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:07.699 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.699 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:07.699 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:07.699 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:07.699 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:07.699 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:07.699 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:07.699 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.699 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.699 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:20:07.699 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.699 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.699 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.699 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.699 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.699 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.958 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.958 "name": "raid_bdev1", 00:20:07.958 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:07.958 "strip_size_kb": 0, 00:20:07.958 "state": "online", 00:20:07.958 "raid_level": "raid1", 00:20:07.958 "superblock": true, 00:20:07.958 "num_base_bdevs": 2, 00:20:07.958 "num_base_bdevs_discovered": 2, 00:20:07.958 "num_base_bdevs_operational": 2, 00:20:07.958 "base_bdevs_list": [ 00:20:07.958 { 00:20:07.958 "name": "BaseBdev1", 00:20:07.958 "uuid": "f43d02a7-3e62-5700-a8f4-58667ff1622a", 00:20:07.958 "is_configured": true, 00:20:07.958 "data_offset": 256, 00:20:07.958 "data_size": 7936 00:20:07.958 }, 00:20:07.958 { 00:20:07.958 "name": "BaseBdev2", 00:20:07.958 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:07.958 "is_configured": true, 00:20:07.958 "data_offset": 256, 00:20:07.958 "data_size": 7936 00:20:07.958 } 00:20:07.958 ] 00:20:07.958 }' 00:20:07.958 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.958 11:29:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.217 11:29:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:08.217 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:08.217 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.217 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.217 [2024-11-20 11:29:51.273851] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:08.217 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.217 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:08.217 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.217 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:08.217 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.217 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.217 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:08.477 11:29:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.477 [2024-11-20 11:29:51.369331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.477 11:29:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.477 "name": "raid_bdev1", 00:20:08.477 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:08.477 "strip_size_kb": 0, 00:20:08.477 "state": "online", 00:20:08.477 "raid_level": "raid1", 00:20:08.477 "superblock": true, 00:20:08.477 "num_base_bdevs": 2, 00:20:08.477 "num_base_bdevs_discovered": 1, 00:20:08.477 "num_base_bdevs_operational": 1, 00:20:08.477 "base_bdevs_list": [ 00:20:08.477 { 00:20:08.477 "name": null, 00:20:08.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.477 "is_configured": false, 00:20:08.477 "data_offset": 0, 00:20:08.477 "data_size": 7936 00:20:08.477 }, 00:20:08.477 { 00:20:08.477 "name": "BaseBdev2", 00:20:08.477 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:08.477 "is_configured": true, 00:20:08.477 "data_offset": 256, 00:20:08.477 "data_size": 7936 00:20:08.477 } 00:20:08.477 ] 00:20:08.477 }' 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.477 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.737 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:08.737 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.737 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.737 [2024-11-20 11:29:51.816648] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:08.737 [2024-11-20 11:29:51.839098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:08.737 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.737 11:29:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:08.737 [2024-11-20 11:29:51.841281] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:10.112 11:29:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:10.112 11:29:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:10.112 11:29:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:10.112 11:29:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:10.112 11:29:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:10.112 11:29:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.112 11:29:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.113 11:29:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.113 11:29:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.113 11:29:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.113 11:29:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:10.113 "name": "raid_bdev1", 00:20:10.113 
"uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:10.113 "strip_size_kb": 0, 00:20:10.113 "state": "online", 00:20:10.113 "raid_level": "raid1", 00:20:10.113 "superblock": true, 00:20:10.113 "num_base_bdevs": 2, 00:20:10.113 "num_base_bdevs_discovered": 2, 00:20:10.113 "num_base_bdevs_operational": 2, 00:20:10.113 "process": { 00:20:10.113 "type": "rebuild", 00:20:10.113 "target": "spare", 00:20:10.113 "progress": { 00:20:10.113 "blocks": 2560, 00:20:10.113 "percent": 32 00:20:10.113 } 00:20:10.113 }, 00:20:10.113 "base_bdevs_list": [ 00:20:10.113 { 00:20:10.113 "name": "spare", 00:20:10.113 "uuid": "7b9360f5-037d-52e1-b54f-e1b4f8dae18d", 00:20:10.113 "is_configured": true, 00:20:10.113 "data_offset": 256, 00:20:10.113 "data_size": 7936 00:20:10.113 }, 00:20:10.113 { 00:20:10.113 "name": "BaseBdev2", 00:20:10.113 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:10.113 "is_configured": true, 00:20:10.113 "data_offset": 256, 00:20:10.113 "data_size": 7936 00:20:10.113 } 00:20:10.113 ] 00:20:10.113 }' 00:20:10.113 11:29:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:10.113 11:29:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:10.113 11:29:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:10.113 11:29:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:10.113 11:29:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:10.113 11:29:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.113 11:29:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.113 [2024-11-20 11:29:52.996579] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:20:10.113 [2024-11-20 11:29:53.047630] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:10.113 [2024-11-20 11:29:53.047768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.113 [2024-11-20 11:29:53.047788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:10.113 [2024-11-20 11:29:53.047804] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:10.113 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.113 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:10.113 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:10.113 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:10.113 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:10.113 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:10.113 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:10.113 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.113 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.113 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.113 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.113 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.113 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.113 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.113 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.113 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.113 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.113 "name": "raid_bdev1", 00:20:10.113 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:10.113 "strip_size_kb": 0, 00:20:10.113 "state": "online", 00:20:10.113 "raid_level": "raid1", 00:20:10.113 "superblock": true, 00:20:10.113 "num_base_bdevs": 2, 00:20:10.113 "num_base_bdevs_discovered": 1, 00:20:10.113 "num_base_bdevs_operational": 1, 00:20:10.113 "base_bdevs_list": [ 00:20:10.113 { 00:20:10.113 "name": null, 00:20:10.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.113 "is_configured": false, 00:20:10.113 "data_offset": 0, 00:20:10.113 "data_size": 7936 00:20:10.113 }, 00:20:10.113 { 00:20:10.113 "name": "BaseBdev2", 00:20:10.113 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:10.113 "is_configured": true, 00:20:10.113 "data_offset": 256, 00:20:10.113 "data_size": 7936 00:20:10.113 } 00:20:10.113 ] 00:20:10.113 }' 00:20:10.113 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.113 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.680 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:10.680 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:20:10.680 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:10.680 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:10.680 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:10.680 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.680 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.680 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.680 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.680 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.680 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:10.680 "name": "raid_bdev1", 00:20:10.680 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:10.680 "strip_size_kb": 0, 00:20:10.680 "state": "online", 00:20:10.680 "raid_level": "raid1", 00:20:10.680 "superblock": true, 00:20:10.680 "num_base_bdevs": 2, 00:20:10.680 "num_base_bdevs_discovered": 1, 00:20:10.680 "num_base_bdevs_operational": 1, 00:20:10.680 "base_bdevs_list": [ 00:20:10.680 { 00:20:10.680 "name": null, 00:20:10.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.680 "is_configured": false, 00:20:10.680 "data_offset": 0, 00:20:10.680 "data_size": 7936 00:20:10.680 }, 00:20:10.680 { 00:20:10.680 "name": "BaseBdev2", 00:20:10.680 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:10.681 "is_configured": true, 00:20:10.681 "data_offset": 256, 00:20:10.681 "data_size": 7936 00:20:10.681 } 00:20:10.681 ] 00:20:10.681 }' 
00:20:10.681 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:10.681 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:10.681 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:10.681 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:10.681 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:10.681 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.681 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.681 [2024-11-20 11:29:53.667640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:10.681 [2024-11-20 11:29:53.687418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:10.681 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.681 11:29:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:10.681 [2024-11-20 11:29:53.689719] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:11.616 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:11.616 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:11.616 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:11.616 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:20:11.616 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:11.616 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.616 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.616 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.616 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.616 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.874 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:11.874 "name": "raid_bdev1", 00:20:11.874 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:11.874 "strip_size_kb": 0, 00:20:11.874 "state": "online", 00:20:11.874 "raid_level": "raid1", 00:20:11.874 "superblock": true, 00:20:11.874 "num_base_bdevs": 2, 00:20:11.874 "num_base_bdevs_discovered": 2, 00:20:11.874 "num_base_bdevs_operational": 2, 00:20:11.874 "process": { 00:20:11.874 "type": "rebuild", 00:20:11.874 "target": "spare", 00:20:11.874 "progress": { 00:20:11.874 "blocks": 2560, 00:20:11.874 "percent": 32 00:20:11.874 } 00:20:11.874 }, 00:20:11.874 "base_bdevs_list": [ 00:20:11.874 { 00:20:11.874 "name": "spare", 00:20:11.874 "uuid": "7b9360f5-037d-52e1-b54f-e1b4f8dae18d", 00:20:11.874 "is_configured": true, 00:20:11.874 "data_offset": 256, 00:20:11.874 "data_size": 7936 00:20:11.874 }, 00:20:11.874 { 00:20:11.874 "name": "BaseBdev2", 00:20:11.874 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:11.874 "is_configured": true, 00:20:11.874 "data_offset": 256, 00:20:11.874 "data_size": 7936 00:20:11.874 } 00:20:11.874 ] 00:20:11.874 }' 00:20:11.874 11:29:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:11.874 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:11.874 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:11.874 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:11.874 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:11.874 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:11.874 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:11.874 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:11.874 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:11.874 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:11.875 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=760 00:20:11.875 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:11.875 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:11.875 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:11.875 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:11.875 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:11.875 11:29:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:11.875 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.875 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.875 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.875 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.875 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.875 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:11.875 "name": "raid_bdev1", 00:20:11.875 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:11.875 "strip_size_kb": 0, 00:20:11.875 "state": "online", 00:20:11.875 "raid_level": "raid1", 00:20:11.875 "superblock": true, 00:20:11.875 "num_base_bdevs": 2, 00:20:11.875 "num_base_bdevs_discovered": 2, 00:20:11.875 "num_base_bdevs_operational": 2, 00:20:11.875 "process": { 00:20:11.875 "type": "rebuild", 00:20:11.875 "target": "spare", 00:20:11.875 "progress": { 00:20:11.875 "blocks": 2816, 00:20:11.875 "percent": 35 00:20:11.875 } 00:20:11.875 }, 00:20:11.875 "base_bdevs_list": [ 00:20:11.875 { 00:20:11.875 "name": "spare", 00:20:11.875 "uuid": "7b9360f5-037d-52e1-b54f-e1b4f8dae18d", 00:20:11.875 "is_configured": true, 00:20:11.875 "data_offset": 256, 00:20:11.875 "data_size": 7936 00:20:11.875 }, 00:20:11.875 { 00:20:11.875 "name": "BaseBdev2", 00:20:11.875 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:11.875 "is_configured": true, 00:20:11.875 "data_offset": 256, 00:20:11.875 "data_size": 7936 00:20:11.875 } 00:20:11.875 ] 00:20:11.875 }' 00:20:11.875 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:11.875 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:11.875 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:11.875 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:11.875 11:29:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:13.250 11:29:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:13.250 11:29:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:13.250 11:29:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:13.250 11:29:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:13.250 11:29:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:13.250 11:29:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:13.250 11:29:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.250 11:29:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.250 11:29:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.250 11:29:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.250 11:29:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.250 11:29:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:13.250 "name": "raid_bdev1", 00:20:13.250 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:13.250 "strip_size_kb": 0, 00:20:13.250 "state": "online", 00:20:13.250 "raid_level": "raid1", 00:20:13.250 "superblock": true, 00:20:13.250 "num_base_bdevs": 2, 00:20:13.250 "num_base_bdevs_discovered": 2, 00:20:13.250 "num_base_bdevs_operational": 2, 00:20:13.250 "process": { 00:20:13.250 "type": "rebuild", 00:20:13.250 "target": "spare", 00:20:13.250 "progress": { 00:20:13.250 "blocks": 5632, 00:20:13.250 "percent": 70 00:20:13.250 } 00:20:13.250 }, 00:20:13.250 "base_bdevs_list": [ 00:20:13.250 { 00:20:13.250 "name": "spare", 00:20:13.250 "uuid": "7b9360f5-037d-52e1-b54f-e1b4f8dae18d", 00:20:13.250 "is_configured": true, 00:20:13.250 "data_offset": 256, 00:20:13.250 "data_size": 7936 00:20:13.250 }, 00:20:13.250 { 00:20:13.250 "name": "BaseBdev2", 00:20:13.250 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:13.250 "is_configured": true, 00:20:13.250 "data_offset": 256, 00:20:13.250 "data_size": 7936 00:20:13.250 } 00:20:13.250 ] 00:20:13.250 }' 00:20:13.250 11:29:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:13.250 11:29:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:13.250 11:29:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:13.250 11:29:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:13.250 11:29:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:13.820 [2024-11-20 11:29:56.805519] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:13.820 [2024-11-20 11:29:56.805731] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:13.820 [2024-11-20 11:29:56.805907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.080 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:14.080 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:14.080 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:14.080 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:14.080 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:14.080 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:14.080 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.080 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.080 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.080 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.080 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.080 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:14.080 "name": "raid_bdev1", 00:20:14.080 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:14.080 "strip_size_kb": 0, 00:20:14.080 "state": "online", 00:20:14.080 "raid_level": "raid1", 00:20:14.080 "superblock": true, 00:20:14.080 "num_base_bdevs": 2, 00:20:14.080 
"num_base_bdevs_discovered": 2, 00:20:14.080 "num_base_bdevs_operational": 2, 00:20:14.080 "base_bdevs_list": [ 00:20:14.080 { 00:20:14.080 "name": "spare", 00:20:14.080 "uuid": "7b9360f5-037d-52e1-b54f-e1b4f8dae18d", 00:20:14.080 "is_configured": true, 00:20:14.080 "data_offset": 256, 00:20:14.080 "data_size": 7936 00:20:14.080 }, 00:20:14.080 { 00:20:14.080 "name": "BaseBdev2", 00:20:14.080 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:14.080 "is_configured": true, 00:20:14.080 "data_offset": 256, 00:20:14.080 "data_size": 7936 00:20:14.080 } 00:20:14.080 ] 00:20:14.080 }' 00:20:14.080 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.339 11:29:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:14.339 "name": "raid_bdev1", 00:20:14.339 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:14.339 "strip_size_kb": 0, 00:20:14.339 "state": "online", 00:20:14.339 "raid_level": "raid1", 00:20:14.339 "superblock": true, 00:20:14.339 "num_base_bdevs": 2, 00:20:14.339 "num_base_bdevs_discovered": 2, 00:20:14.339 "num_base_bdevs_operational": 2, 00:20:14.339 "base_bdevs_list": [ 00:20:14.339 { 00:20:14.339 "name": "spare", 00:20:14.339 "uuid": "7b9360f5-037d-52e1-b54f-e1b4f8dae18d", 00:20:14.339 "is_configured": true, 00:20:14.339 "data_offset": 256, 00:20:14.339 "data_size": 7936 00:20:14.339 }, 00:20:14.339 { 00:20:14.339 "name": "BaseBdev2", 00:20:14.339 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:14.339 "is_configured": true, 00:20:14.339 "data_offset": 256, 00:20:14.339 "data_size": 7936 00:20:14.339 } 00:20:14.339 ] 00:20:14.339 }' 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:14.339 11:29:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.339 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.598 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.598 "name": 
"raid_bdev1", 00:20:14.598 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:14.598 "strip_size_kb": 0, 00:20:14.598 "state": "online", 00:20:14.598 "raid_level": "raid1", 00:20:14.598 "superblock": true, 00:20:14.598 "num_base_bdevs": 2, 00:20:14.598 "num_base_bdevs_discovered": 2, 00:20:14.598 "num_base_bdevs_operational": 2, 00:20:14.598 "base_bdevs_list": [ 00:20:14.598 { 00:20:14.598 "name": "spare", 00:20:14.598 "uuid": "7b9360f5-037d-52e1-b54f-e1b4f8dae18d", 00:20:14.598 "is_configured": true, 00:20:14.598 "data_offset": 256, 00:20:14.598 "data_size": 7936 00:20:14.598 }, 00:20:14.598 { 00:20:14.598 "name": "BaseBdev2", 00:20:14.598 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:14.598 "is_configured": true, 00:20:14.598 "data_offset": 256, 00:20:14.598 "data_size": 7936 00:20:14.599 } 00:20:14.599 ] 00:20:14.599 }' 00:20:14.599 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.599 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.859 [2024-11-20 11:29:57.873225] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:14.859 [2024-11-20 11:29:57.873269] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:14.859 [2024-11-20 11:29:57.873381] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:14.859 [2024-11-20 11:29:57.873481] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:14.859 [2024-11-20 
11:29:57.873497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.859 11:29:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.859 [2024-11-20 11:29:57.929119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:14.859 [2024-11-20 11:29:57.929261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:14.859 [2024-11-20 11:29:57.929312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:14.859 [2024-11-20 11:29:57.929356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.859 [2024-11-20 11:29:57.931716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.859 [2024-11-20 11:29:57.931806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:14.859 [2024-11-20 11:29:57.931920] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:14.859 [2024-11-20 11:29:57.932033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:14.859 [2024-11-20 11:29:57.932213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:14.859 spare 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.859 11:29:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.144 [2024-11-20 11:29:58.032183] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:15.144 [2024-11-20 11:29:58.032357] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:15.144 [2024-11-20 11:29:58.032551] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:15.144 [2024-11-20 11:29:58.032721] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:15.144 [2024-11-20 11:29:58.032764] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:15.144 [2024-11-20 11:29:58.032962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.144 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.144 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:15.144 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:15.144 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:15.144 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:15.144 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:15.144 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:15.144 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.144 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.144 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.144 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.144 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.144 11:29:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.144 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.144 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.144 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.144 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.144 "name": "raid_bdev1", 00:20:15.144 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:15.144 "strip_size_kb": 0, 00:20:15.144 "state": "online", 00:20:15.144 "raid_level": "raid1", 00:20:15.144 "superblock": true, 00:20:15.144 "num_base_bdevs": 2, 00:20:15.144 "num_base_bdevs_discovered": 2, 00:20:15.144 "num_base_bdevs_operational": 2, 00:20:15.144 "base_bdevs_list": [ 00:20:15.144 { 00:20:15.144 "name": "spare", 00:20:15.144 "uuid": "7b9360f5-037d-52e1-b54f-e1b4f8dae18d", 00:20:15.144 "is_configured": true, 00:20:15.144 "data_offset": 256, 00:20:15.144 "data_size": 7936 00:20:15.144 }, 00:20:15.144 { 00:20:15.144 "name": "BaseBdev2", 00:20:15.144 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:15.144 "is_configured": true, 00:20:15.144 "data_offset": 256, 00:20:15.144 "data_size": 7936 00:20:15.144 } 00:20:15.144 ] 00:20:15.144 }' 00:20:15.144 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.144 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.429 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:15.429 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:15.429 11:29:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:15.429 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:15.429 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:15.429 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.429 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.429 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.429 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.429 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.429 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:15.429 "name": "raid_bdev1", 00:20:15.429 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:15.429 "strip_size_kb": 0, 00:20:15.429 "state": "online", 00:20:15.429 "raid_level": "raid1", 00:20:15.429 "superblock": true, 00:20:15.429 "num_base_bdevs": 2, 00:20:15.429 "num_base_bdevs_discovered": 2, 00:20:15.429 "num_base_bdevs_operational": 2, 00:20:15.429 "base_bdevs_list": [ 00:20:15.429 { 00:20:15.429 "name": "spare", 00:20:15.429 "uuid": "7b9360f5-037d-52e1-b54f-e1b4f8dae18d", 00:20:15.429 "is_configured": true, 00:20:15.429 "data_offset": 256, 00:20:15.429 "data_size": 7936 00:20:15.429 }, 00:20:15.429 { 00:20:15.429 "name": "BaseBdev2", 00:20:15.429 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:15.429 "is_configured": true, 00:20:15.429 "data_offset": 256, 00:20:15.429 "data_size": 7936 00:20:15.429 } 00:20:15.429 ] 00:20:15.429 }' 00:20:15.429 11:29:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.690 [2024-11-20 11:29:58.680019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:15.690 11:29:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.690 "name": "raid_bdev1", 00:20:15.690 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:15.690 "strip_size_kb": 0, 00:20:15.690 "state": "online", 00:20:15.690 
"raid_level": "raid1", 00:20:15.690 "superblock": true, 00:20:15.690 "num_base_bdevs": 2, 00:20:15.690 "num_base_bdevs_discovered": 1, 00:20:15.690 "num_base_bdevs_operational": 1, 00:20:15.690 "base_bdevs_list": [ 00:20:15.690 { 00:20:15.690 "name": null, 00:20:15.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.690 "is_configured": false, 00:20:15.690 "data_offset": 0, 00:20:15.690 "data_size": 7936 00:20:15.690 }, 00:20:15.690 { 00:20:15.690 "name": "BaseBdev2", 00:20:15.690 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:15.690 "is_configured": true, 00:20:15.690 "data_offset": 256, 00:20:15.690 "data_size": 7936 00:20:15.690 } 00:20:15.690 ] 00:20:15.690 }' 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.690 11:29:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.268 11:29:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:16.268 11:29:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.268 11:29:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.268 [2024-11-20 11:29:59.171687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:16.268 [2024-11-20 11:29:59.171925] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:16.268 [2024-11-20 11:29:59.171945] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:16.268 [2024-11-20 11:29:59.171998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:16.268 [2024-11-20 11:29:59.191004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:16.268 11:29:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.268 11:29:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:16.268 [2024-11-20 11:29:59.193323] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:17.207 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:17.207 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:17.207 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:17.207 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:17.207 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:17.207 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.207 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.208 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.208 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.208 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.208 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:20:17.208 "name": "raid_bdev1", 00:20:17.208 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:17.208 "strip_size_kb": 0, 00:20:17.208 "state": "online", 00:20:17.208 "raid_level": "raid1", 00:20:17.208 "superblock": true, 00:20:17.208 "num_base_bdevs": 2, 00:20:17.208 "num_base_bdevs_discovered": 2, 00:20:17.208 "num_base_bdevs_operational": 2, 00:20:17.208 "process": { 00:20:17.208 "type": "rebuild", 00:20:17.208 "target": "spare", 00:20:17.208 "progress": { 00:20:17.208 "blocks": 2560, 00:20:17.208 "percent": 32 00:20:17.208 } 00:20:17.208 }, 00:20:17.208 "base_bdevs_list": [ 00:20:17.208 { 00:20:17.208 "name": "spare", 00:20:17.208 "uuid": "7b9360f5-037d-52e1-b54f-e1b4f8dae18d", 00:20:17.208 "is_configured": true, 00:20:17.208 "data_offset": 256, 00:20:17.208 "data_size": 7936 00:20:17.208 }, 00:20:17.208 { 00:20:17.208 "name": "BaseBdev2", 00:20:17.208 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:17.208 "is_configured": true, 00:20:17.208 "data_offset": 256, 00:20:17.208 "data_size": 7936 00:20:17.208 } 00:20:17.208 ] 00:20:17.208 }' 00:20:17.208 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:17.208 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:17.208 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.467 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:17.467 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:17.467 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.467 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.467 [2024-11-20 11:30:00.340708] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:17.467 [2024-11-20 11:30:00.399808] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:17.467 [2024-11-20 11:30:00.399926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.467 [2024-11-20 11:30:00.399945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:17.467 [2024-11-20 11:30:00.399957] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:17.467 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.467 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:17.467 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:17.467 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:17.467 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:17.467 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:17.467 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:17.467 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.468 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.468 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.468 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.468 11:30:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.468 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.468 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.468 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.468 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.468 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.468 "name": "raid_bdev1", 00:20:17.468 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:17.468 "strip_size_kb": 0, 00:20:17.468 "state": "online", 00:20:17.468 "raid_level": "raid1", 00:20:17.468 "superblock": true, 00:20:17.468 "num_base_bdevs": 2, 00:20:17.468 "num_base_bdevs_discovered": 1, 00:20:17.468 "num_base_bdevs_operational": 1, 00:20:17.468 "base_bdevs_list": [ 00:20:17.468 { 00:20:17.468 "name": null, 00:20:17.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.468 "is_configured": false, 00:20:17.468 "data_offset": 0, 00:20:17.468 "data_size": 7936 00:20:17.468 }, 00:20:17.468 { 00:20:17.468 "name": "BaseBdev2", 00:20:17.468 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:17.468 "is_configured": true, 00:20:17.468 "data_offset": 256, 00:20:17.468 "data_size": 7936 00:20:17.468 } 00:20:17.468 ] 00:20:17.468 }' 00:20:17.468 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.468 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.036 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:18.036 11:30:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.036 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.036 [2024-11-20 11:30:00.931672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:18.036 [2024-11-20 11:30:00.931767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.036 [2024-11-20 11:30:00.931801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:18.036 [2024-11-20 11:30:00.931815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.036 [2024-11-20 11:30:00.932059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.036 [2024-11-20 11:30:00.932080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:18.036 [2024-11-20 11:30:00.932153] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:18.036 [2024-11-20 11:30:00.932171] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:18.036 [2024-11-20 11:30:00.932182] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:18.036 [2024-11-20 11:30:00.932217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:18.036 [2024-11-20 11:30:00.951957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:18.036 spare 00:20:18.036 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.036 11:30:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:18.036 [2024-11-20 11:30:00.954202] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:18.972 11:30:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.972 11:30:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:18.972 11:30:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:18.972 11:30:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:18.972 11:30:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.972 11:30:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.972 11:30:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.972 11:30:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.972 11:30:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.972 11:30:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.972 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:18.972 "name": "raid_bdev1", 00:20:18.972 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:18.972 "strip_size_kb": 0, 00:20:18.972 "state": "online", 00:20:18.972 "raid_level": "raid1", 00:20:18.972 "superblock": true, 00:20:18.972 "num_base_bdevs": 2, 00:20:18.972 "num_base_bdevs_discovered": 2, 00:20:18.972 "num_base_bdevs_operational": 2, 00:20:18.972 "process": { 00:20:18.972 "type": "rebuild", 00:20:18.972 "target": "spare", 00:20:18.972 "progress": { 00:20:18.972 "blocks": 2560, 00:20:18.972 "percent": 32 00:20:18.972 } 00:20:18.972 }, 00:20:18.972 "base_bdevs_list": [ 00:20:18.972 { 00:20:18.972 "name": "spare", 00:20:18.972 "uuid": "7b9360f5-037d-52e1-b54f-e1b4f8dae18d", 00:20:18.972 "is_configured": true, 00:20:18.972 "data_offset": 256, 00:20:18.972 "data_size": 7936 00:20:18.972 }, 00:20:18.972 { 00:20:18.972 "name": "BaseBdev2", 00:20:18.972 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:18.972 "is_configured": true, 00:20:18.972 "data_offset": 256, 00:20:18.972 "data_size": 7936 00:20:18.972 } 00:20:18.972 ] 00:20:18.972 }' 00:20:18.972 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.972 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:18.972 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.231 [2024-11-20 
11:30:02.105720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:19.231 [2024-11-20 11:30:02.161404] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:19.231 [2024-11-20 11:30:02.161710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.231 [2024-11-20 11:30:02.161779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:19.231 [2024-11-20 11:30:02.161820] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.231 11:30:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.231 "name": "raid_bdev1", 00:20:19.231 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:19.231 "strip_size_kb": 0, 00:20:19.231 "state": "online", 00:20:19.231 "raid_level": "raid1", 00:20:19.231 "superblock": true, 00:20:19.231 "num_base_bdevs": 2, 00:20:19.231 "num_base_bdevs_discovered": 1, 00:20:19.231 "num_base_bdevs_operational": 1, 00:20:19.231 "base_bdevs_list": [ 00:20:19.231 { 00:20:19.231 "name": null, 00:20:19.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.231 "is_configured": false, 00:20:19.231 "data_offset": 0, 00:20:19.231 "data_size": 7936 00:20:19.231 }, 00:20:19.231 { 00:20:19.231 "name": "BaseBdev2", 00:20:19.231 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:19.231 "is_configured": true, 00:20:19.231 "data_offset": 256, 00:20:19.231 "data_size": 7936 00:20:19.231 } 00:20:19.231 ] 00:20:19.231 }' 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.231 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.545 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:19.545 11:30:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.545 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:19.545 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:19.545 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.545 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.545 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.545 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.545 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.804 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.804 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.804 "name": "raid_bdev1", 00:20:19.804 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:19.804 "strip_size_kb": 0, 00:20:19.804 "state": "online", 00:20:19.804 "raid_level": "raid1", 00:20:19.804 "superblock": true, 00:20:19.804 "num_base_bdevs": 2, 00:20:19.804 "num_base_bdevs_discovered": 1, 00:20:19.804 "num_base_bdevs_operational": 1, 00:20:19.804 "base_bdevs_list": [ 00:20:19.804 { 00:20:19.804 "name": null, 00:20:19.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.804 "is_configured": false, 00:20:19.804 "data_offset": 0, 00:20:19.804 "data_size": 7936 00:20:19.804 }, 00:20:19.804 { 00:20:19.804 "name": "BaseBdev2", 00:20:19.804 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:19.804 "is_configured": true, 00:20:19.804 "data_offset": 256, 
00:20:19.804 "data_size": 7936 00:20:19.804 } 00:20:19.804 ] 00:20:19.804 }' 00:20:19.804 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.804 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:19.804 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.804 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:19.804 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:19.804 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.804 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.804 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.804 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:19.804 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.804 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.804 [2024-11-20 11:30:02.804962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:19.804 [2024-11-20 11:30:02.805048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.804 [2024-11-20 11:30:02.805082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:19.804 [2024-11-20 11:30:02.805093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.804 [2024-11-20 11:30:02.805300] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.804 [2024-11-20 11:30:02.805314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:19.804 [2024-11-20 11:30:02.805385] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:19.804 [2024-11-20 11:30:02.805399] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:19.804 [2024-11-20 11:30:02.805410] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:19.804 [2024-11-20 11:30:02.805423] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:19.804 BaseBdev1 00:20:19.804 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.804 11:30:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:20.739 11:30:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:20.739 11:30:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:20.739 11:30:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:20.739 11:30:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:20.739 11:30:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:20.739 11:30:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:20.739 11:30:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:20.739 11:30:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:20.739 11:30:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:20.739 11:30:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:20.739 11:30:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.739 11:30:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.739 11:30:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.739 11:30:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.739 11:30:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.739 11:30:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:20.739 "name": "raid_bdev1", 00:20:20.739 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:20.739 "strip_size_kb": 0, 00:20:20.739 "state": "online", 00:20:20.739 "raid_level": "raid1", 00:20:20.739 "superblock": true, 00:20:20.739 "num_base_bdevs": 2, 00:20:20.739 "num_base_bdevs_discovered": 1, 00:20:20.739 "num_base_bdevs_operational": 1, 00:20:20.739 "base_bdevs_list": [ 00:20:20.739 { 00:20:20.739 "name": null, 00:20:20.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.739 "is_configured": false, 00:20:20.739 "data_offset": 0, 00:20:20.739 "data_size": 7936 00:20:20.739 }, 00:20:20.739 { 00:20:20.739 "name": "BaseBdev2", 00:20:20.739 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:20.739 "is_configured": true, 00:20:20.739 "data_offset": 256, 00:20:20.739 "data_size": 7936 00:20:20.739 } 00:20:20.739 ] 00:20:20.739 }' 00:20:20.739 11:30:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:20.739 11:30:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.307 "name": "raid_bdev1", 00:20:21.307 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:21.307 "strip_size_kb": 0, 00:20:21.307 "state": "online", 00:20:21.307 "raid_level": "raid1", 00:20:21.307 "superblock": true, 00:20:21.307 "num_base_bdevs": 2, 00:20:21.307 "num_base_bdevs_discovered": 1, 00:20:21.307 "num_base_bdevs_operational": 1, 00:20:21.307 "base_bdevs_list": [ 00:20:21.307 { 00:20:21.307 "name": 
null, 00:20:21.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.307 "is_configured": false, 00:20:21.307 "data_offset": 0, 00:20:21.307 "data_size": 7936 00:20:21.307 }, 00:20:21.307 { 00:20:21.307 "name": "BaseBdev2", 00:20:21.307 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:21.307 "is_configured": true, 00:20:21.307 "data_offset": 256, 00:20:21.307 "data_size": 7936 00:20:21.307 } 00:20:21.307 ] 00:20:21.307 }' 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.307 [2024-11-20 11:30:04.411693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:21.307 [2024-11-20 11:30:04.412015] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:21.307 [2024-11-20 11:30:04.412099] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:21.307 request: 00:20:21.307 { 00:20:21.307 "base_bdev": "BaseBdev1", 00:20:21.307 "raid_bdev": "raid_bdev1", 00:20:21.307 "method": "bdev_raid_add_base_bdev", 00:20:21.307 "req_id": 1 00:20:21.307 } 00:20:21.307 Got JSON-RPC error response 00:20:21.307 response: 00:20:21.307 { 00:20:21.307 "code": -22, 00:20:21.307 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:21.307 } 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:21.307 11:30:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:22.684 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:20:22.684 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:22.684 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:22.684 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:22.684 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:22.684 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:22.684 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.684 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.684 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.684 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.684 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.684 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.684 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.684 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.684 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.684 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.684 "name": "raid_bdev1", 00:20:22.684 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:22.684 "strip_size_kb": 0, 
00:20:22.684 "state": "online", 00:20:22.684 "raid_level": "raid1", 00:20:22.684 "superblock": true, 00:20:22.684 "num_base_bdevs": 2, 00:20:22.684 "num_base_bdevs_discovered": 1, 00:20:22.684 "num_base_bdevs_operational": 1, 00:20:22.684 "base_bdevs_list": [ 00:20:22.684 { 00:20:22.684 "name": null, 00:20:22.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.684 "is_configured": false, 00:20:22.684 "data_offset": 0, 00:20:22.684 "data_size": 7936 00:20:22.684 }, 00:20:22.684 { 00:20:22.684 "name": "BaseBdev2", 00:20:22.684 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:22.684 "is_configured": true, 00:20:22.684 "data_offset": 256, 00:20:22.684 "data_size": 7936 00:20:22.684 } 00:20:22.684 ] 00:20:22.684 }' 00:20:22.684 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.684 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.950 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:22.950 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:22.950 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:22.950 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:22.950 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:22.950 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.950 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.950 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.950 
11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.950 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.950 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:22.950 "name": "raid_bdev1", 00:20:22.950 "uuid": "b786f2f9-67d3-48ad-bacb-9a956ef13276", 00:20:22.950 "strip_size_kb": 0, 00:20:22.950 "state": "online", 00:20:22.950 "raid_level": "raid1", 00:20:22.950 "superblock": true, 00:20:22.950 "num_base_bdevs": 2, 00:20:22.950 "num_base_bdevs_discovered": 1, 00:20:22.950 "num_base_bdevs_operational": 1, 00:20:22.950 "base_bdevs_list": [ 00:20:22.950 { 00:20:22.950 "name": null, 00:20:22.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.950 "is_configured": false, 00:20:22.950 "data_offset": 0, 00:20:22.950 "data_size": 7936 00:20:22.950 }, 00:20:22.950 { 00:20:22.950 "name": "BaseBdev2", 00:20:22.950 "uuid": "3fce5910-52ae-5d09-bd0e-1f301124a52e", 00:20:22.950 "is_configured": true, 00:20:22.950 "data_offset": 256, 00:20:22.950 "data_size": 7936 00:20:22.950 } 00:20:22.950 ] 00:20:22.950 }' 00:20:22.950 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:22.950 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:22.950 11:30:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:22.950 11:30:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:22.951 11:30:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89285 00:20:22.951 11:30:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89285 ']' 00:20:22.951 11:30:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89285 00:20:22.951 11:30:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:22.951 11:30:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:22.951 11:30:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89285 00:20:23.217 killing process with pid 89285 00:20:23.217 Received shutdown signal, test time was about 60.000000 seconds 00:20:23.217 00:20:23.217 Latency(us) 00:20:23.217 [2024-11-20T11:30:06.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.217 [2024-11-20T11:30:06.333Z] =================================================================================================================== 00:20:23.217 [2024-11-20T11:30:06.333Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:23.217 11:30:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:23.217 11:30:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:23.217 11:30:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89285' 00:20:23.217 11:30:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89285 00:20:23.217 11:30:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89285 00:20:23.217 [2024-11-20 11:30:06.066689] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:23.217 [2024-11-20 11:30:06.066865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:23.217 [2024-11-20 11:30:06.067000] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:20:23.217 [2024-11-20 11:30:06.067021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:23.475 [2024-11-20 11:30:06.438839] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:24.915 ************************************ 00:20:24.915 END TEST raid_rebuild_test_sb_md_interleaved 00:20:24.915 ************************************ 00:20:24.915 11:30:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:20:24.915 00:20:24.915 real 0m18.253s 00:20:24.915 user 0m24.071s 00:20:24.915 sys 0m1.495s 00:20:24.915 11:30:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:24.915 11:30:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.915 11:30:07 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:20:24.915 11:30:07 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:20:24.915 11:30:07 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89285 ']' 00:20:24.915 11:30:07 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89285 00:20:24.915 11:30:07 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:20:24.915 00:20:24.915 real 12m23.638s 00:20:24.915 user 16m50.086s 00:20:24.915 sys 1m53.431s 00:20:24.915 11:30:07 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:24.915 11:30:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:24.915 ************************************ 00:20:24.915 END TEST bdev_raid 00:20:24.915 ************************************ 00:20:24.915 11:30:07 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:24.915 11:30:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:24.915 11:30:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:24.915 11:30:07 -- common/autotest_common.sh@10 -- # set +x 00:20:24.915 
************************************ 00:20:24.915 START TEST spdkcli_raid 00:20:24.915 ************************************ 00:20:24.915 11:30:07 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:24.915 * Looking for test storage... 00:20:24.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:24.915 11:30:07 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:24.915 11:30:07 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:20:24.915 11:30:07 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:25.175 11:30:08 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:25.175 11:30:08 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:20:25.175 11:30:08 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:25.175 11:30:08 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:25.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.175 --rc genhtml_branch_coverage=1 00:20:25.175 --rc genhtml_function_coverage=1 00:20:25.175 --rc genhtml_legend=1 00:20:25.175 --rc geninfo_all_blocks=1 00:20:25.175 --rc geninfo_unexecuted_blocks=1 00:20:25.175 00:20:25.175 ' 00:20:25.175 11:30:08 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:25.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.175 --rc genhtml_branch_coverage=1 00:20:25.175 --rc genhtml_function_coverage=1 00:20:25.175 --rc genhtml_legend=1 00:20:25.175 --rc geninfo_all_blocks=1 00:20:25.175 --rc geninfo_unexecuted_blocks=1 00:20:25.175 00:20:25.175 ' 00:20:25.175 
11:30:08 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:25.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.175 --rc genhtml_branch_coverage=1 00:20:25.175 --rc genhtml_function_coverage=1 00:20:25.175 --rc genhtml_legend=1 00:20:25.175 --rc geninfo_all_blocks=1 00:20:25.175 --rc geninfo_unexecuted_blocks=1 00:20:25.175 00:20:25.175 ' 00:20:25.175 11:30:08 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:25.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.175 --rc genhtml_branch_coverage=1 00:20:25.175 --rc genhtml_function_coverage=1 00:20:25.175 --rc genhtml_legend=1 00:20:25.175 --rc geninfo_all_blocks=1 00:20:25.175 --rc geninfo_unexecuted_blocks=1 00:20:25.176 00:20:25.176 ' 00:20:25.176 11:30:08 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:25.176 11:30:08 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:25.176 11:30:08 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:25.176 11:30:08 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:25.176 11:30:08 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:25.176 11:30:08 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:25.176 11:30:08 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:25.176 11:30:08 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:25.176 11:30:08 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:25.176 11:30:08 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:25.176 11:30:08 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:20:25.176 11:30:08 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:25.176 11:30:08 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:25.176 11:30:08 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:25.176 11:30:08 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:25.176 11:30:08 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:25.176 11:30:08 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:25.176 11:30:08 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:25.176 11:30:08 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:25.176 11:30:08 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:25.176 11:30:08 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:25.176 11:30:08 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:25.176 11:30:08 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:25.176 11:30:08 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:20:25.176 11:30:08 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:20:25.176 11:30:08 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:25.176 11:30:08 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:25.176 11:30:08 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:25.176 11:30:08 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:25.176 11:30:08 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:25.176 11:30:08 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:25.176 11:30:08 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:20:25.176 11:30:08 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:20:25.176 11:30:08 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:25.176 11:30:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:25.176 11:30:08 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:20:25.176 11:30:08 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89963 00:20:25.176 11:30:08 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89963 00:20:25.176 11:30:08 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89963 ']' 00:20:25.176 11:30:08 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:20:25.176 11:30:08 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.176 11:30:08 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.176 11:30:08 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.176 11:30:08 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.176 11:30:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:25.176 [2024-11-20 11:30:08.205715] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:20:25.176 [2024-11-20 11:30:08.205954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89963 ] 00:20:25.435 [2024-11-20 11:30:08.389835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:25.435 [2024-11-20 11:30:08.531580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.435 [2024-11-20 11:30:08.531619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.813 11:30:09 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.813 11:30:09 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:20:26.813 11:30:09 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:20:26.813 11:30:09 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:26.813 11:30:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:26.813 11:30:09 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:20:26.813 11:30:09 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:26.813 11:30:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:26.813 11:30:09 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:20:26.813 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:20:26.813 ' 00:20:28.188 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:20:28.188 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:20:28.445 11:30:11 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:20:28.445 11:30:11 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:28.445 11:30:11 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:20:28.445 11:30:11 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:20:28.445 11:30:11 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:28.445 11:30:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:28.446 11:30:11 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:20:28.446 ' 00:20:29.823 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:20:29.823 11:30:12 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:20:29.823 11:30:12 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:29.823 11:30:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:29.823 11:30:12 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:20:29.823 11:30:12 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:29.823 11:30:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:29.823 11:30:12 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:20:29.823 11:30:12 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:20:30.388 11:30:13 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:20:30.388 11:30:13 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:20:30.388 11:30:13 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:20:30.388 11:30:13 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:30.388 11:30:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:30.388 11:30:13 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:20:30.388 11:30:13 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:30.388 11:30:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:30.388 11:30:13 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:20:30.388 ' 00:20:31.353 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:20:31.612 11:30:14 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:20:31.612 11:30:14 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:31.612 11:30:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:31.612 11:30:14 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:20:31.612 11:30:14 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:31.612 11:30:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:31.612 11:30:14 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:20:31.612 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:20:31.612 ' 00:20:32.993 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:20:32.993 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:20:33.252 11:30:16 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:20:33.252 11:30:16 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:33.252 11:30:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:33.252 11:30:16 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89963 00:20:33.252 11:30:16 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89963 ']' 00:20:33.252 11:30:16 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89963 00:20:33.252 11:30:16 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:20:33.252 11:30:16 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.252 11:30:16 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89963 00:20:33.252 11:30:16 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:33.252 11:30:16 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:33.252 11:30:16 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89963' 00:20:33.252 killing process with pid 89963 00:20:33.252 11:30:16 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89963 00:20:33.252 11:30:16 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89963 00:20:36.578 11:30:19 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:20:36.578 11:30:19 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89963 ']' 00:20:36.578 11:30:19 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89963 00:20:36.578 11:30:19 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89963 ']' 00:20:36.578 11:30:19 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89963 00:20:36.578 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89963) - No such process 00:20:36.578 11:30:19 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89963 is not found' 00:20:36.578 Process with pid 89963 is not found 00:20:36.578 11:30:19 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:20:36.579 11:30:19 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:20:36.579 11:30:19 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:20:36.579 11:30:19 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:20:36.579 00:20:36.579 real 0m11.204s 00:20:36.579 user 0m23.437s 00:20:36.579 sys 
0m1.064s 00:20:36.579 11:30:19 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:36.579 11:30:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:36.579 ************************************ 00:20:36.579 END TEST spdkcli_raid 00:20:36.579 ************************************ 00:20:36.579 11:30:19 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:36.579 11:30:19 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:36.579 11:30:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.579 11:30:19 -- common/autotest_common.sh@10 -- # set +x 00:20:36.579 ************************************ 00:20:36.579 START TEST blockdev_raid5f 00:20:36.579 ************************************ 00:20:36.579 11:30:19 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:36.579 * Looking for test storage... 00:20:36.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:36.579 11:30:19 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:36.579 11:30:19 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:36.579 11:30:19 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:20:36.579 11:30:19 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:36.579 11:30:19 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:20:36.579 11:30:19 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:36.579 11:30:19 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:36.579 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.579 --rc genhtml_branch_coverage=1 00:20:36.579 --rc genhtml_function_coverage=1 00:20:36.579 --rc genhtml_legend=1 00:20:36.579 --rc geninfo_all_blocks=1 00:20:36.579 --rc geninfo_unexecuted_blocks=1 00:20:36.579 00:20:36.579 ' 00:20:36.579 11:30:19 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:36.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.579 --rc genhtml_branch_coverage=1 00:20:36.579 --rc genhtml_function_coverage=1 00:20:36.579 --rc genhtml_legend=1 00:20:36.579 --rc geninfo_all_blocks=1 00:20:36.579 --rc geninfo_unexecuted_blocks=1 00:20:36.579 00:20:36.579 ' 00:20:36.579 11:30:19 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:36.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.579 --rc genhtml_branch_coverage=1 00:20:36.579 --rc genhtml_function_coverage=1 00:20:36.579 --rc genhtml_legend=1 00:20:36.579 --rc geninfo_all_blocks=1 00:20:36.579 --rc geninfo_unexecuted_blocks=1 00:20:36.579 00:20:36.579 ' 00:20:36.579 11:30:19 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:36.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.579 --rc genhtml_branch_coverage=1 00:20:36.579 --rc genhtml_function_coverage=1 00:20:36.579 --rc genhtml_legend=1 00:20:36.579 --rc geninfo_all_blocks=1 00:20:36.579 --rc geninfo_unexecuted_blocks=1 00:20:36.579 00:20:36.579 ' 00:20:36.579 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:36.579 11:30:19 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:20:36.579 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:36.579 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:36.579 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:36.579 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:36.579 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:20:36.579 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:36.579 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:20:36.579 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:20:36.579 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:20:36.579 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:20:36.579 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:20:36.579 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:20:36.580 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:20:36.580 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:20:36.580 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:20:36.580 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:20:36.580 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:20:36.580 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:20:36.580 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:20:36.580 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:20:36.580 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:20:36.580 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:20:36.580 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90259 00:20:36.580 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:36.580 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
90259 00:20:36.580 11:30:19 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90259 ']' 00:20:36.580 11:30:19 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.580 11:30:19 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.580 11:30:19 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:36.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.580 11:30:19 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.580 11:30:19 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.580 11:30:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:36.580 [2024-11-20 11:30:19.499013] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:20:36.580 [2024-11-20 11:30:19.499181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90259 ] 00:20:36.580 [2024-11-20 11:30:19.668642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.858 [2024-11-20 11:30:19.816851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.819 11:30:20 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:37.819 11:30:20 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:20:37.819 11:30:20 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:20:37.819 11:30:20 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:20:37.819 11:30:20 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:20:37.819 11:30:20 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.819 11:30:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:37.819 Malloc0 00:20:38.087 Malloc1 00:20:38.087 Malloc2 00:20:38.087 11:30:20 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.087 11:30:20 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:20:38.087 11:30:20 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.087 11:30:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:38.087 11:30:20 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.087 11:30:20 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:20:38.087 11:30:21 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:20:38.087 11:30:21 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.087 11:30:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:38.087 11:30:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.087 11:30:21 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:20:38.087 11:30:21 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.087 11:30:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:38.087 11:30:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.087 11:30:21 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:38.087 11:30:21 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.087 11:30:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:38.087 11:30:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.087 11:30:21 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:20:38.087 11:30:21 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:20:38.087 11:30:21 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:20:38.087 11:30:21 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.087 11:30:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:38.087 11:30:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.087 11:30:21 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:20:38.088 11:30:21 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "92c92785-fc35-4808-beb9-256defe42961"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "92c92785-fc35-4808-beb9-256defe42961",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "92c92785-fc35-4808-beb9-256defe42961",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "97e7f17f-4dee-4c25-a28c-0feceb9f4239",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "63c9c73b-3a6b-4258-a45f-0bd57f0af25b",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "490d8ea3-25aa-426b-97d0-cba92a1b6947",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:38.088 11:30:21 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:20:38.088 11:30:21 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:20:38.088 11:30:21 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:20:38.088 11:30:21 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:20:38.088 11:30:21 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90259 00:20:38.088 11:30:21 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90259 ']' 00:20:38.088 11:30:21 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90259 00:20:38.088 11:30:21 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:20:38.088 11:30:21 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.088 11:30:21 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90259 00:20:38.347 11:30:21 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:38.347 11:30:21 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:38.347 killing process with pid 90259 00:20:38.347 11:30:21 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90259' 00:20:38.347 11:30:21 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90259 00:20:38.347 11:30:21 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90259 00:20:41.632 11:30:24 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:41.632 11:30:24 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:41.632 11:30:24 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:41.632 11:30:24 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.632 11:30:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:41.632 ************************************ 00:20:41.632 START TEST bdev_hello_world 00:20:41.632 ************************************ 00:20:41.632 11:30:24 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:41.632 [2024-11-20 11:30:24.570380] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:20:41.632 [2024-11-20 11:30:24.570563] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90328 ] 00:20:41.891 [2024-11-20 11:30:24.747510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.891 [2024-11-20 11:30:24.887409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.457 [2024-11-20 11:30:25.496836] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:42.457 [2024-11-20 11:30:25.496900] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:20:42.457 [2024-11-20 11:30:25.496928] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:42.457 [2024-11-20 11:30:25.497587] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:42.457 [2024-11-20 11:30:25.497797] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:42.457 [2024-11-20 11:30:25.497833] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:42.457 [2024-11-20 11:30:25.497908] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:20:42.457 00:20:42.457 [2024-11-20 11:30:25.497940] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:44.397 00:20:44.397 real 0m2.713s 00:20:44.397 user 0m2.310s 00:20:44.397 sys 0m0.274s 00:20:44.397 11:30:27 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.397 11:30:27 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:44.397 ************************************ 00:20:44.397 END TEST bdev_hello_world 00:20:44.397 ************************************ 00:20:44.397 11:30:27 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:20:44.397 11:30:27 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:44.397 11:30:27 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:44.397 11:30:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:44.397 ************************************ 00:20:44.397 START TEST bdev_bounds 00:20:44.397 ************************************ 00:20:44.397 11:30:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:20:44.397 Process bdevio pid: 90376 00:20:44.397 11:30:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90376 00:20:44.398 11:30:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:44.398 11:30:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:44.398 11:30:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90376' 00:20:44.398 11:30:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90376 00:20:44.398 11:30:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90376 ']' 00:20:44.398 11:30:27 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.398 11:30:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.398 11:30:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.398 11:30:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.398 11:30:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:44.398 [2024-11-20 11:30:27.329591] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:20:44.398 [2024-11-20 11:30:27.329745] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90376 ] 00:20:44.398 [2024-11-20 11:30:27.509941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:44.656 [2024-11-20 11:30:27.649011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.656 [2024-11-20 11:30:27.649033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.656 [2024-11-20 11:30:27.649040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.222 11:30:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.222 11:30:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:20:45.222 11:30:28 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:45.481 I/O targets: 00:20:45.481 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:20:45.481 00:20:45.481 
00:20:45.481 CUnit - A unit testing framework for C - Version 2.1-3 00:20:45.481 http://cunit.sourceforge.net/ 00:20:45.481 00:20:45.481 00:20:45.481 Suite: bdevio tests on: raid5f 00:20:45.481 Test: blockdev write read block ...passed 00:20:45.481 Test: blockdev write zeroes read block ...passed 00:20:45.481 Test: blockdev write zeroes read no split ...passed 00:20:45.481 Test: blockdev write zeroes read split ...passed 00:20:45.752 Test: blockdev write zeroes read split partial ...passed 00:20:45.752 Test: blockdev reset ...passed 00:20:45.752 Test: blockdev write read 8 blocks ...passed 00:20:45.752 Test: blockdev write read size > 128k ...passed 00:20:45.752 Test: blockdev write read invalid size ...passed 00:20:45.752 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:45.752 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:45.752 Test: blockdev write read max offset ...passed 00:20:45.752 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:45.752 Test: blockdev writev readv 8 blocks ...passed 00:20:45.752 Test: blockdev writev readv 30 x 1block ...passed 00:20:45.752 Test: blockdev writev readv block ...passed 00:20:45.752 Test: blockdev writev readv size > 128k ...passed 00:20:45.752 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:45.752 Test: blockdev comparev and writev ...passed 00:20:45.752 Test: blockdev nvme passthru rw ...passed 00:20:45.752 Test: blockdev nvme passthru vendor specific ...passed 00:20:45.752 Test: blockdev nvme admin passthru ...passed 00:20:45.752 Test: blockdev copy ...passed 00:20:45.752 00:20:45.752 Run Summary: Type Total Ran Passed Failed Inactive 00:20:45.752 suites 1 1 n/a 0 0 00:20:45.752 tests 23 23 23 0 0 00:20:45.752 asserts 130 130 130 0 n/a 00:20:45.752 00:20:45.752 Elapsed time = 0.726 seconds 00:20:45.752 0 00:20:45.752 11:30:28 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90376 00:20:45.752 
11:30:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90376 ']' 00:20:45.752 11:30:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90376 00:20:45.752 11:30:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:20:45.753 11:30:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.753 11:30:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90376 00:20:45.753 killing process with pid 90376 00:20:45.753 11:30:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:45.753 11:30:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:45.753 11:30:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90376' 00:20:45.753 11:30:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90376 00:20:45.753 11:30:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90376 00:20:47.670 ************************************ 00:20:47.670 END TEST bdev_bounds 00:20:47.670 ************************************ 00:20:47.670 11:30:30 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:47.670 00:20:47.670 real 0m3.250s 00:20:47.670 user 0m8.216s 00:20:47.670 sys 0m0.423s 00:20:47.670 11:30:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.670 11:30:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:47.670 11:30:30 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:47.670 11:30:30 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:47.670 11:30:30 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:47.670 
11:30:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:47.670 ************************************ 00:20:47.670 START TEST bdev_nbd 00:20:47.670 ************************************ 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90442 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90442 /var/tmp/spdk-nbd.sock 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90442 ']' 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:47.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.670 11:30:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:47.670 [2024-11-20 11:30:30.672765] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:20:47.670 [2024-11-20 11:30:30.672959] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.930 [2024-11-20 11:30:30.855351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.930 [2024-11-20 11:30:30.998370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:48.864 1+0 records in 00:20:48.864 1+0 records out 00:20:48.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354867 s, 11.5 MB/s 00:20:48.864 11:30:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:49.123 11:30:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:49.123 11:30:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:49.123 11:30:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:20:49.123 11:30:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:49.123 11:30:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:49.123 11:30:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:49.123 11:30:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:49.382 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:49.382 { 00:20:49.382 "nbd_device": "/dev/nbd0", 00:20:49.382 "bdev_name": "raid5f" 00:20:49.382 } 00:20:49.382 ]' 00:20:49.382 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:49.382 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:49.382 { 00:20:49.382 "nbd_device": "/dev/nbd0", 00:20:49.382 "bdev_name": "raid5f" 00:20:49.382 } 00:20:49.382 ]' 00:20:49.382 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:49.382 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:49.382 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:49.382 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:49.382 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:49.382 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:49.382 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:49.382 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:49.640 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:20:49.640 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:49.640 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:49.640 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:49.640 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:49.640 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:49.640 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:49.641 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:49.641 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:49.641 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:49.641 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:49.899 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:49.899 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:49.899 11:30:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:49.899 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:49.899 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:50.157 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:20:50.416 /dev/nbd0 00:20:50.416 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:50.416 11:30:33 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:50.416 11:30:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:50.416 11:30:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:50.416 11:30:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:50.416 11:30:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:50.416 11:30:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:50.416 11:30:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:50.416 11:30:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:50.416 11:30:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:50.416 11:30:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:50.416 1+0 records in 00:20:50.416 1+0 records out 00:20:50.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358981 s, 11.4 MB/s 00:20:50.416 11:30:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:50.416 11:30:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:50.416 11:30:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:50.416 11:30:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:50.416 11:30:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:50.416 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:50.416 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:50.416 11:30:33 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:50.416 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:50.416 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:50.675 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:50.675 { 00:20:50.675 "nbd_device": "/dev/nbd0", 00:20:50.675 "bdev_name": "raid5f" 00:20:50.675 } 00:20:50.675 ]' 00:20:50.675 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:50.675 { 00:20:50.675 "nbd_device": "/dev/nbd0", 00:20:50.675 "bdev_name": "raid5f" 00:20:50.675 } 00:20:50.675 ]' 00:20:50.675 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:50.675 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:20:50.675 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:50.676 256+0 records in 00:20:50.676 256+0 records out 00:20:50.676 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122362 s, 85.7 MB/s 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:50.676 256+0 records in 00:20:50.676 256+0 records out 00:20:50.676 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0415569 s, 25.2 MB/s 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:50.676 11:30:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:50.935 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:50.935 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:50.935 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:50.935 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:50.935 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:50.935 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:50.935 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:50.935 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:50.935 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:50.935 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:50.935 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:20:51.195 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:51.195 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:51.195 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:51.195 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:51.195 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:51.195 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:51.195 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:51.195 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:51.195 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:51.195 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:51.195 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:51.195 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:51.195 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:51.195 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:51.195 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:51.195 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:51.455 malloc_lvol_verify 00:20:51.713 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:51.713 87e8e211-9317-4b89-b4e3-0d0dee66c3f6 00:20:51.713 11:30:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:51.972 9830b78f-6a30-4173-a148-139f31a2b917 00:20:51.972 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:52.231 /dev/nbd0 00:20:52.231 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:52.231 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:52.231 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:52.231 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:52.231 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:52.231 mke2fs 1.47.0 (5-Feb-2023) 00:20:52.231 Discarding device blocks: 0/4096 done 00:20:52.231 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:52.231 00:20:52.231 Allocating group tables: 0/1 done 00:20:52.231 Writing inode tables: 0/1 done 00:20:52.231 Creating journal (1024 blocks): done 00:20:52.231 Writing superblocks and filesystem accounting information: 0/1 done 00:20:52.231 00:20:52.231 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:52.231 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:52.231 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:52.231 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:52.231 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:52.231 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:52.231 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:52.502 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:52.502 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:52.502 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:52.502 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:52.502 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:52.502 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:52.502 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:52.502 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:52.502 11:30:35 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90442 00:20:52.502 11:30:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90442 ']' 00:20:52.502 11:30:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90442 00:20:52.502 11:30:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:20:52.502 11:30:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.502 11:30:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90442 00:20:52.502 11:30:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:52.502 11:30:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:52.502 killing process with pid 90442 00:20:52.502 11:30:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90442' 00:20:52.502 11:30:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90442 00:20:52.502 11:30:35 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90442 00:20:54.411 11:30:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:54.411 00:20:54.411 real 0m6.798s 00:20:54.411 user 0m9.548s 00:20:54.411 sys 0m1.398s 00:20:54.411 11:30:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:54.411 11:30:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:54.411 ************************************ 00:20:54.411 END TEST bdev_nbd 00:20:54.411 ************************************ 00:20:54.411 11:30:37 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:20:54.411 11:30:37 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:20:54.411 11:30:37 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:20:54.411 11:30:37 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:20:54.411 11:30:37 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:54.411 11:30:37 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:54.411 11:30:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:54.411 ************************************ 00:20:54.411 START TEST bdev_fio 00:20:54.411 ************************************ 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:54.411 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:54.411 11:30:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:54.670 ************************************ 00:20:54.670 START TEST bdev_fio_rw_verify 00:20:54.670 ************************************ 00:20:54.670 11:30:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:54.671 11:30:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:54.671 11:30:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:54.671 11:30:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:54.671 11:30:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:54.671 11:30:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:54.671 11:30:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:20:54.671 11:30:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:54.671 11:30:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.671 11:30:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:20:54.671 11:30:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:54.671 11:30:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:54.671 11:30:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:54.671 11:30:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:54.671 11:30:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:20:54.671 11:30:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:54.671 11:30:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:54.928 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:54.928 fio-3.35 00:20:54.928 Starting 1 thread 00:21:07.157 00:21:07.157 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90655: Wed Nov 20 11:30:48 2024 00:21:07.157 read: IOPS=8345, BW=32.6MiB/s (34.2MB/s)(326MiB/10001msec) 00:21:07.157 slat (nsec): min=22115, max=93603, avg=28410.56, stdev=3910.45 00:21:07.157 clat (usec): min=12, max=463, avg=189.02, stdev=69.50 00:21:07.157 lat (usec): min=35, max=506, avg=217.43, stdev=70.40 00:21:07.157 clat percentiles (usec): 00:21:07.157 | 50.000th=[ 188], 99.000th=[ 338], 99.900th=[ 392], 99.990th=[ 433], 00:21:07.157 | 99.999th=[ 465] 00:21:07.157 write: IOPS=8717, BW=34.1MiB/s (35.7MB/s)(336MiB/9873msec); 0 zone resets 00:21:07.157 slat (usec): min=9, max=154, avg=25.12, stdev= 6.89 00:21:07.157 clat (usec): min=85, max=886, avg=437.78, stdev=71.91 00:21:07.157 lat (usec): min=110, max=923, avg=462.90, stdev=74.75 00:21:07.157 clat percentiles (usec): 00:21:07.157 | 50.000th=[ 441], 99.000th=[ 660], 99.900th=[ 725], 99.990th=[ 824], 00:21:07.157 | 99.999th=[ 889] 00:21:07.157 bw ( KiB/s): min=28304, max=42200, per=98.52%, avg=34353.42, stdev=3119.71, samples=19 00:21:07.157 iops : min= 7076, max=10550, avg=8588.16, stdev=779.99, samples=19 00:21:07.157 lat (usec) : 20=0.01%, 100=6.46%, 
250=31.90%, 500=53.75%, 750=7.87% 00:21:07.157 lat (usec) : 1000=0.02% 00:21:07.157 cpu : usr=98.69%, sys=0.43%, ctx=23, majf=0, minf=7247 00:21:07.157 IO depths : 1=7.8%, 2=20.0%, 4=55.0%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:07.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.157 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.157 issued rwts: total=83463,86067,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.157 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:07.157 00:21:07.157 Run status group 0 (all jobs): 00:21:07.157 READ: bw=32.6MiB/s (34.2MB/s), 32.6MiB/s-32.6MiB/s (34.2MB/s-34.2MB/s), io=326MiB (342MB), run=10001-10001msec 00:21:07.157 WRITE: bw=34.1MiB/s (35.7MB/s), 34.1MiB/s-34.1MiB/s (35.7MB/s-35.7MB/s), io=336MiB (353MB), run=9873-9873msec 00:21:07.727 ----------------------------------------------------- 00:21:07.727 Suppressions used: 00:21:07.727 count bytes template 00:21:07.727 1 7 /usr/src/fio/parse.c 00:21:07.727 24 2304 /usr/src/fio/iolog.c 00:21:07.727 1 8 libtcmalloc_minimal.so 00:21:07.727 1 904 libcrypto.so 00:21:07.727 ----------------------------------------------------- 00:21:07.727 00:21:07.727 00:21:07.727 real 0m13.151s 00:21:07.727 user 0m13.379s 00:21:07.727 sys 0m0.787s 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:21:07.727 ************************************ 00:21:07.727 END TEST bdev_fio_rw_verify 00:21:07.727 ************************************ 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "92c92785-fc35-4808-beb9-256defe42961"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "92c92785-fc35-4808-beb9-256defe42961",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "92c92785-fc35-4808-beb9-256defe42961",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "97e7f17f-4dee-4c25-a28c-0feceb9f4239",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "63c9c73b-3a6b-4258-a45f-0bd57f0af25b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "490d8ea3-25aa-426b-97d0-cba92a1b6947",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:07.727 /home/vagrant/spdk_repo/spdk 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:21:07.727 00:21:07.727 real 0m13.380s 
00:21:07.727 user 0m13.489s 00:21:07.727 sys 0m0.880s 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.727 11:30:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:07.727 ************************************ 00:21:07.728 END TEST bdev_fio 00:21:07.728 ************************************ 00:21:07.728 11:30:50 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:07.728 11:30:50 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:07.728 11:30:50 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:07.728 11:30:50 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.728 11:30:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:07.728 ************************************ 00:21:07.728 START TEST bdev_verify 00:21:07.728 ************************************ 00:21:07.728 11:30:50 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:07.988 [2024-11-20 11:30:50.933928] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:21:07.988 [2024-11-20 11:30:50.934108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90819 ] 00:21:08.247 [2024-11-20 11:30:51.117578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:08.247 [2024-11-20 11:30:51.258960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.247 [2024-11-20 11:30:51.258998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.814 Running I/O for 5 seconds... 00:21:11.179 9480.00 IOPS, 37.03 MiB/s [2024-11-20T11:30:55.245Z] 10563.50 IOPS, 41.26 MiB/s [2024-11-20T11:30:56.182Z] 11110.33 IOPS, 43.40 MiB/s [2024-11-20T11:30:57.120Z] 11157.75 IOPS, 43.58 MiB/s [2024-11-20T11:30:57.120Z] 11189.20 IOPS, 43.71 MiB/s 00:21:14.004 Latency(us) 00:21:14.004 [2024-11-20T11:30:57.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.004 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:14.004 Verification LBA range: start 0x0 length 0x2000 00:21:14.004 raid5f : 5.02 5543.90 21.66 0.00 0.00 34658.32 2060.52 39149.89 00:21:14.004 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:14.004 Verification LBA range: start 0x2000 length 0x2000 00:21:14.004 raid5f : 5.01 5618.39 21.95 0.00 0.00 34148.36 296.92 30449.91 00:21:14.004 [2024-11-20T11:30:57.120Z] =================================================================================================================== 00:21:14.004 [2024-11-20T11:30:57.120Z] Total : 11162.29 43.60 0.00 0.00 34401.80 296.92 39149.89 00:21:15.909 00:21:15.909 real 0m7.745s 00:21:15.909 user 0m14.201s 00:21:15.909 sys 0m0.303s 00:21:15.909 11:30:58 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:15.909 11:30:58 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:21:15.909 ************************************ 00:21:15.909 END TEST bdev_verify 00:21:15.909 ************************************ 00:21:15.909 11:30:58 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:15.909 11:30:58 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:15.909 11:30:58 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:15.909 11:30:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:15.909 ************************************ 00:21:15.909 START TEST bdev_verify_big_io 00:21:15.909 ************************************ 00:21:15.909 11:30:58 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:15.909 [2024-11-20 11:30:58.710858] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:21:15.909 [2024-11-20 11:30:58.710985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90919 ] 00:21:15.909 [2024-11-20 11:30:58.890238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:16.168 [2024-11-20 11:30:59.030268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.168 [2024-11-20 11:30:59.030279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.737 Running I/O for 5 seconds... 
00:21:19.052 506.00 IOPS, 31.62 MiB/s [2024-11-20T11:31:02.734Z] 569.00 IOPS, 35.56 MiB/s [2024-11-20T11:31:04.109Z] 592.00 IOPS, 37.00 MiB/s [2024-11-20T11:31:05.044Z] 571.00 IOPS, 35.69 MiB/s [2024-11-20T11:31:05.044Z] 595.80 IOPS, 37.24 MiB/s 00:21:21.928 Latency(us) 00:21:21.928 [2024-11-20T11:31:05.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.928 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:21.928 Verification LBA range: start 0x0 length 0x200 00:21:21.928 raid5f : 5.31 298.96 18.69 0.00 0.00 10303241.09 211.06 483535.48 00:21:21.928 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:21.928 Verification LBA range: start 0x200 length 0x200 00:21:21.928 raid5f : 5.30 299.12 18.69 0.00 0.00 10322103.08 336.27 479872.34 00:21:21.928 [2024-11-20T11:31:05.044Z] =================================================================================================================== 00:21:21.928 [2024-11-20T11:31:05.044Z] Total : 598.08 37.38 0.00 0.00 10312672.09 211.06 483535.48 00:21:23.830 00:21:23.830 real 0m8.032s 00:21:23.830 user 0m14.784s 00:21:23.830 sys 0m0.300s 00:21:23.830 11:31:06 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.830 11:31:06 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:21:23.830 ************************************ 00:21:23.830 END TEST bdev_verify_big_io 00:21:23.830 ************************************ 00:21:23.830 11:31:06 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:23.830 11:31:06 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:23.830 11:31:06 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:23.830 11:31:06 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:23.830 ************************************ 00:21:23.830 START TEST bdev_write_zeroes 00:21:23.830 ************************************ 00:21:23.830 11:31:06 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:23.830 [2024-11-20 11:31:06.832222] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:21:23.830 [2024-11-20 11:31:06.832439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91022 ] 00:21:24.090 [2024-11-20 11:31:07.014212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.090 [2024-11-20 11:31:07.158690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.662 Running I/O for 1 seconds... 
00:21:26.040 18591.00 IOPS, 72.62 MiB/s 00:21:26.040 Latency(us) 00:21:26.040 [2024-11-20T11:31:09.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.040 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:26.040 raid5f : 1.01 18571.95 72.55 0.00 0.00 6863.81 2017.59 9157.87 00:21:26.040 [2024-11-20T11:31:09.156Z] =================================================================================================================== 00:21:26.040 [2024-11-20T11:31:09.156Z] Total : 18571.95 72.55 0.00 0.00 6863.81 2017.59 9157.87 00:21:27.418 00:21:27.418 real 0m3.757s 00:21:27.418 user 0m3.343s 00:21:27.418 sys 0m0.279s 00:21:27.418 11:31:10 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.418 11:31:10 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:27.418 ************************************ 00:21:27.418 END TEST bdev_write_zeroes 00:21:27.418 ************************************ 00:21:27.418 11:31:10 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:27.418 11:31:10 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:27.418 11:31:10 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:27.418 11:31:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:27.678 ************************************ 00:21:27.678 START TEST bdev_json_nonenclosed 00:21:27.678 ************************************ 00:21:27.678 11:31:10 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:27.678 [2024-11-20 
11:31:10.622598] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:21:27.678 [2024-11-20 11:31:10.622727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91082 ] 00:21:27.678 [2024-11-20 11:31:10.789322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.937 [2024-11-20 11:31:10.914485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.937 [2024-11-20 11:31:10.914583] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:27.937 [2024-11-20 11:31:10.914610] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:27.937 [2024-11-20 11:31:10.914621] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:28.196 00:21:28.196 real 0m0.668s 00:21:28.196 user 0m0.437s 00:21:28.196 sys 0m0.125s 00:21:28.196 11:31:11 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:28.196 11:31:11 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:28.196 ************************************ 00:21:28.196 END TEST bdev_json_nonenclosed 00:21:28.196 ************************************ 00:21:28.196 11:31:11 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:28.196 11:31:11 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:28.196 11:31:11 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:28.196 11:31:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:28.196 
************************************ 00:21:28.196 START TEST bdev_json_nonarray 00:21:28.196 ************************************ 00:21:28.196 11:31:11 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:28.456 [2024-11-20 11:31:11.360820] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:21:28.456 [2024-11-20 11:31:11.360957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91102 ] 00:21:28.456 [2024-11-20 11:31:11.525489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.716 [2024-11-20 11:31:11.668068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.716 [2024-11-20 11:31:11.668184] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:21:28.716 [2024-11-20 11:31:11.668209] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:28.716 [2024-11-20 11:31:11.668233] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:28.976 00:21:28.976 real 0m0.689s 00:21:28.976 user 0m0.468s 00:21:28.976 sys 0m0.116s 00:21:28.976 11:31:11 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:28.976 11:31:11 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:28.976 ************************************ 00:21:28.976 END TEST bdev_json_nonarray 00:21:28.976 ************************************ 00:21:28.976 11:31:12 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:21:28.976 11:31:12 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:21:28.976 11:31:12 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:21:28.976 11:31:12 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:21:28.976 11:31:12 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:21:28.976 11:31:12 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:28.976 11:31:12 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:28.977 11:31:12 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:21:28.977 11:31:12 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:21:28.977 11:31:12 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:21:28.977 11:31:12 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:21:28.977 00:21:28.977 real 0m52.847s 00:21:28.977 user 1m12.142s 00:21:28.977 sys 0m5.160s 00:21:28.977 11:31:12 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:28.977 11:31:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:28.977 
************************************ 00:21:28.977 END TEST blockdev_raid5f 00:21:28.977 ************************************ 00:21:28.977 11:31:12 -- spdk/autotest.sh@194 -- # uname -s 00:21:28.977 11:31:12 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:21:28.977 11:31:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:28.977 11:31:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:28.977 11:31:12 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:21:28.977 11:31:12 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:21:28.977 11:31:12 -- spdk/autotest.sh@260 -- # timing_exit lib 00:21:28.977 11:31:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:28.977 11:31:12 -- common/autotest_common.sh@10 -- # set +x 00:21:29.236 11:31:12 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:21:29.236 11:31:12 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:21:29.236 11:31:12 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:21:29.236 11:31:12 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:29.236 11:31:12 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:29.236 11:31:12 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:29.236 11:31:12 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:29.236 11:31:12 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:29.236 11:31:12 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:29.236 11:31:12 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:29.236 11:31:12 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:29.236 11:31:12 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:21:29.236 11:31:12 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:29.236 11:31:12 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:21:29.236 11:31:12 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:29.236 11:31:12 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:29.236 11:31:12 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:21:29.236 11:31:12 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:21:29.236 11:31:12 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:21:29.236 11:31:12 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:21:29.236 11:31:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:29.236 11:31:12 -- common/autotest_common.sh@10 -- # set +x 00:21:29.236 11:31:12 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:21:29.236 11:31:12 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:21:29.236 11:31:12 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:21:29.236 11:31:12 -- common/autotest_common.sh@10 -- # set +x 00:21:31.144 INFO: APP EXITING 00:21:31.144 INFO: killing all VMs 00:21:31.144 INFO: killing vhost app 00:21:31.144 INFO: EXIT DONE 00:21:31.144 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:31.144 Waiting for block devices as requested 00:21:31.144 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:31.144 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:32.081 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:32.081 Cleaning 00:21:32.081 Removing: /var/run/dpdk/spdk0/config 00:21:32.081 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:32.081 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:32.081 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:32.081 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:32.081 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:32.081 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:32.081 Removing: /dev/shm/spdk_tgt_trace.pid56915 00:21:32.081 Removing: /var/run/dpdk/spdk0 00:21:32.081 Removing: /var/run/dpdk/spdk_pid56663 00:21:32.081 Removing: /var/run/dpdk/spdk_pid56915 00:21:32.081 Removing: /var/run/dpdk/spdk_pid57144 00:21:32.081 Removing: /var/run/dpdk/spdk_pid57259 00:21:32.081 Removing: /var/run/dpdk/spdk_pid57304 00:21:32.081 Removing: /var/run/dpdk/spdk_pid57443 00:21:32.340 Removing: /var/run/dpdk/spdk_pid57461 
00:21:32.340 Removing: /var/run/dpdk/spdk_pid57671 00:21:32.340 Removing: /var/run/dpdk/spdk_pid57788 00:21:32.340 Removing: /var/run/dpdk/spdk_pid57895 00:21:32.340 Removing: /var/run/dpdk/spdk_pid58017 00:21:32.340 Removing: /var/run/dpdk/spdk_pid58131 00:21:32.340 Removing: /var/run/dpdk/spdk_pid58165 00:21:32.340 Removing: /var/run/dpdk/spdk_pid58207 00:21:32.340 Removing: /var/run/dpdk/spdk_pid58283 00:21:32.340 Removing: /var/run/dpdk/spdk_pid58389 00:21:32.340 Removing: /var/run/dpdk/spdk_pid58858 00:21:32.340 Removing: /var/run/dpdk/spdk_pid58939 00:21:32.340 Removing: /var/run/dpdk/spdk_pid59015 00:21:32.340 Removing: /var/run/dpdk/spdk_pid59031 00:21:32.340 Removing: /var/run/dpdk/spdk_pid59191 00:21:32.340 Removing: /var/run/dpdk/spdk_pid59207 00:21:32.340 Removing: /var/run/dpdk/spdk_pid59370 00:21:32.340 Removing: /var/run/dpdk/spdk_pid59392 00:21:32.340 Removing: /var/run/dpdk/spdk_pid59467 00:21:32.340 Removing: /var/run/dpdk/spdk_pid59485 00:21:32.340 Removing: /var/run/dpdk/spdk_pid59560 00:21:32.340 Removing: /var/run/dpdk/spdk_pid59578 00:21:32.340 Removing: /var/run/dpdk/spdk_pid59784 00:21:32.340 Removing: /var/run/dpdk/spdk_pid59826 00:21:32.340 Removing: /var/run/dpdk/spdk_pid59915 00:21:32.340 Removing: /var/run/dpdk/spdk_pid61297 00:21:32.340 Removing: /var/run/dpdk/spdk_pid61509 00:21:32.340 Removing: /var/run/dpdk/spdk_pid61654 00:21:32.340 Removing: /var/run/dpdk/spdk_pid62303 00:21:32.340 Removing: /var/run/dpdk/spdk_pid62520 00:21:32.340 Removing: /var/run/dpdk/spdk_pid62666 00:21:32.340 Removing: /var/run/dpdk/spdk_pid63317 00:21:32.340 Removing: /var/run/dpdk/spdk_pid63647 00:21:32.340 Removing: /var/run/dpdk/spdk_pid63791 00:21:32.340 Removing: /var/run/dpdk/spdk_pid65190 00:21:32.340 Removing: /var/run/dpdk/spdk_pid65443 00:21:32.340 Removing: /var/run/dpdk/spdk_pid65588 00:21:32.340 Removing: /var/run/dpdk/spdk_pid66991 00:21:32.340 Removing: /var/run/dpdk/spdk_pid67243 00:21:32.340 Removing: /var/run/dpdk/spdk_pid67386 
00:21:32.340 Removing: /var/run/dpdk/spdk_pid68777 00:21:32.340 Removing: /var/run/dpdk/spdk_pid69223 00:21:32.340 Removing: /var/run/dpdk/spdk_pid69370 00:21:32.340 Removing: /var/run/dpdk/spdk_pid70866 00:21:32.340 Removing: /var/run/dpdk/spdk_pid71128 00:21:32.340 Removing: /var/run/dpdk/spdk_pid71274 00:21:32.340 Removing: /var/run/dpdk/spdk_pid72769 00:21:32.340 Removing: /var/run/dpdk/spdk_pid73039 00:21:32.340 Removing: /var/run/dpdk/spdk_pid73184 00:21:32.340 Removing: /var/run/dpdk/spdk_pid74676 00:21:32.340 Removing: /var/run/dpdk/spdk_pid75163 00:21:32.340 Removing: /var/run/dpdk/spdk_pid75309 00:21:32.340 Removing: /var/run/dpdk/spdk_pid75458 00:21:32.340 Removing: /var/run/dpdk/spdk_pid75883 00:21:32.340 Removing: /var/run/dpdk/spdk_pid76620 00:21:32.340 Removing: /var/run/dpdk/spdk_pid77015 00:21:32.340 Removing: /var/run/dpdk/spdk_pid77705 00:21:32.340 Removing: /var/run/dpdk/spdk_pid78151 00:21:32.340 Removing: /var/run/dpdk/spdk_pid78910 00:21:32.340 Removing: /var/run/dpdk/spdk_pid79325 00:21:32.340 Removing: /var/run/dpdk/spdk_pid81311 00:21:32.340 Removing: /var/run/dpdk/spdk_pid81760 00:21:32.340 Removing: /var/run/dpdk/spdk_pid82212 00:21:32.340 Removing: /var/run/dpdk/spdk_pid84326 00:21:32.340 Removing: /var/run/dpdk/spdk_pid84817 00:21:32.599 Removing: /var/run/dpdk/spdk_pid85340 00:21:32.599 Removing: /var/run/dpdk/spdk_pid86403 00:21:32.599 Removing: /var/run/dpdk/spdk_pid86732 00:21:32.599 Removing: /var/run/dpdk/spdk_pid87685 00:21:32.599 Removing: /var/run/dpdk/spdk_pid88013 00:21:32.599 Removing: /var/run/dpdk/spdk_pid88958 00:21:32.599 Removing: /var/run/dpdk/spdk_pid89285 00:21:32.599 Removing: /var/run/dpdk/spdk_pid89963 00:21:32.599 Removing: /var/run/dpdk/spdk_pid90259 00:21:32.599 Removing: /var/run/dpdk/spdk_pid90328 00:21:32.599 Removing: /var/run/dpdk/spdk_pid90376 00:21:32.599 Removing: /var/run/dpdk/spdk_pid90640 00:21:32.599 Removing: /var/run/dpdk/spdk_pid90819 00:21:32.599 Removing: /var/run/dpdk/spdk_pid90919 
00:21:32.599 Removing: /var/run/dpdk/spdk_pid91022 00:21:32.599 Removing: /var/run/dpdk/spdk_pid91082 00:21:32.599 Removing: /var/run/dpdk/spdk_pid91102 00:21:32.599 Clean 00:21:32.599 11:31:15 -- common/autotest_common.sh@1453 -- # return 0 00:21:32.599 11:31:15 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:21:32.599 11:31:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:32.599 11:31:15 -- common/autotest_common.sh@10 -- # set +x 00:21:32.599 11:31:15 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:21:32.599 11:31:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:32.599 11:31:15 -- common/autotest_common.sh@10 -- # set +x 00:21:32.599 11:31:15 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:32.599 11:31:15 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:32.599 11:31:15 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:32.599 11:31:15 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:21:32.599 11:31:15 -- spdk/autotest.sh@398 -- # hostname 00:21:32.599 11:31:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:32.858 geninfo: WARNING: invalid characters removed from testname! 
00:21:59.442 11:31:40 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:01.343 11:31:44 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:03.250 11:31:46 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:05.831 11:31:48 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:08.368 11:31:51 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:10.294 11:31:53 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:12.825 11:31:55 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:12.825 11:31:55 -- spdk/autorun.sh@1 -- $ timing_finish 00:22:12.825 11:31:55 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:22:12.826 11:31:55 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:12.826 11:31:55 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:12.826 11:31:55 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:12.826 + [[ -n 5434 ]] 00:22:12.826 + sudo kill 5434 00:22:12.835 [Pipeline] } 00:22:12.851 [Pipeline] // timeout 00:22:12.856 [Pipeline] } 00:22:12.871 [Pipeline] // stage 00:22:12.876 [Pipeline] } 00:22:12.892 [Pipeline] // catchError 00:22:12.903 [Pipeline] stage 00:22:12.905 [Pipeline] { (Stop VM) 00:22:12.920 [Pipeline] sh 00:22:13.200 + vagrant halt 00:22:16.481 ==> default: Halting domain... 00:22:24.611 [Pipeline] sh 00:22:24.894 + vagrant destroy -f 00:22:28.203 ==> default: Removing domain... 
00:22:28.215 [Pipeline] sh 00:22:28.502 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:22:28.511 [Pipeline] } 00:22:28.527 [Pipeline] // stage 00:22:28.533 [Pipeline] } 00:22:28.548 [Pipeline] // dir 00:22:28.554 [Pipeline] } 00:22:28.569 [Pipeline] // wrap 00:22:28.576 [Pipeline] } 00:22:28.590 [Pipeline] // catchError 00:22:28.600 [Pipeline] stage 00:22:28.603 [Pipeline] { (Epilogue) 00:22:28.616 [Pipeline] sh 00:22:28.900 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:35.486 [Pipeline] catchError 00:22:35.489 [Pipeline] { 00:22:35.504 [Pipeline] sh 00:22:35.791 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:35.791 Artifacts sizes are good 00:22:35.801 [Pipeline] } 00:22:35.815 [Pipeline] // catchError 00:22:35.827 [Pipeline] archiveArtifacts 00:22:35.834 Archiving artifacts 00:22:35.963 [Pipeline] cleanWs 00:22:35.978 [WS-CLEANUP] Deleting project workspace... 00:22:35.978 [WS-CLEANUP] Deferred wipeout is used... 00:22:35.986 [WS-CLEANUP] done 00:22:35.988 [Pipeline] } 00:22:36.002 [Pipeline] // stage 00:22:36.007 [Pipeline] } 00:22:36.020 [Pipeline] // node 00:22:36.026 [Pipeline] End of Pipeline 00:22:36.073 Finished: SUCCESS